
AI Ethics Code of SCANDIC FINANCE GROUP LIMITED and the SCANDIC Brand Ecosystem

0. Company information and cooperation structure

This Code of Ethics for Artificial Intelligence (hereinafter referred to as the "AI Ethics Code") applies to:

Scandic Banking Hong Kong

SCANDIC FINANCE GROUP LIMITED

Room 10, Unit A, 7/F
Harbour Sky, 28 Sze Shan Street
Yau Tong, Hong Kong, SAR-PRC

Head office telephone (Zurich, Switzerland): +41 44 7979 99 – 85
Email: Office@ScandicFinance.Global


In cooperation with:

SCANDIC ASSETS FZCO

Dubai Silicon Oasis DDP Building A1/A2
Dubai, 342001, United Arab Emirates

Telephone: +971 56 929 86 – 90
Email: Info@ScandicAssets.dev


In cooperation with:

SCANDIC TRUST GROUP LLC

IQ Business Centre, Bolsunovska Street 13 – 15
01014 Kyiv, Ukraine

Head office telephone (London, United Kingdom of Great Britain and Northern Ireland): +44 7470 86 92 – 60
Email: Info@ScandicTrust.com


In cooperation with:

LEGIER BETEILIGUNGS GMBH

Kurfürstendamm 14
10719 Berlin, Federal Republic of Germany

Commercial register number Berlin: HRB 57837
Telephone: +49 (0) 30 9921134 – 69
Email: Office@LegierGroup.com

Legal notice: SCANDIC ASSETS FZCO, LEGIER Beteiligungs Gesellschaft mit beschränkter Haftung and SCANDIC TRUST GROUP LLC act as non-operational service providers. All operational and responsible activities are carried out by SCANDIC FINANCE GROUP LIMITED, Hong Kong, Special Administrative Region of the People's Republic of China.

Applicability in the brand ecosystem:

This AI Code of Ethics applies to the SCANDIC brand ecosystem, in particular to its brands and services, as well as to all structures held or supported by SCANDIC FINANCE GROUP LIMITED.

Table of contents

  1. Preamble and scope
  2. Core values and guiding principles
  3. Governance and responsibilities (Artificial Intelligence Ethics Committee, accountability model)
  4. Legal and regulatory framework (European Union Artificial Intelligence Regulation, General Data Protection Regulation, Digital Services Act, copyright law, commercial law)
  5. Risk classification and assessment of the impact of artificial intelligence
  6. Data ethics and data protection (legal basis, data protection impact assessment, internet identifiers, transfer to third countries)
  7. Life cycle of models and data (model life cycle, data cards, model cards)
  8. Transparency, explainability and user notices
  9. Human oversight and supervisory duties
  10. Security, robustness and adversarial testing
  11. Supply chain, human rights and fair labour
  12. Bias control, fairness and inclusion
  13. Generative artificial intelligence, proof of origin and labelling
  14. Content, moderation and processes under the Digital Services Act
  15. Domain-specific use in the SCANDIC brand ecosystem
  16. Third parties, procurement and risk management of service providers
  17. Operation, monitoring, emergency and recovery plans
  18. Incidents and remedies (ethics, data protection, security)
  19. Metrics, key performance indicators and safeguards
  20. Training, awareness and cultural change
  21. Implementation and roadmap (0 – 6 / 6 – 12 / 12 – 24 months)
  22. Roles and responsibility matrix
  23. Checklists (brief assessment of impact, data release, release for commissioning)
  24. Forms and templates (model card, data card, incident report)
  25. Glossary and references

1. Preamble and scope

1.1

SCANDIC FINANCE GROUP LIMITED recognises the profound importance of artificial intelligence systems for financial services, media, health, mobility, real estate, data processing and digital infrastructure. The aim of this AI Ethics Code is to create a binding framework for the responsible, legally compliant and human-centred use of artificial intelligence.

1.2

This Code applies worldwide to all artificial intelligence systems operated or managed by SCANDIC FINANCE GROUP LIMITED, including those developed, operated or used in cooperation with SCANDIC ASSETS FZCO, SCANDIC TRUST GROUP LLC or LEGIER Beteiligungs Gesellschaft mit beschränkter Haftung.

1.3

The Code is binding for the following groups:

  • Employees and managers of SCANDIC FINANCE GROUP LIMITED and its affiliated companies,
  • External service providers, processors and suppliers,
  • Partner companies within the SCANDIC brand ecosystem,
  • Other third parties who develop, operate or provide artificial intelligence systems on behalf of or in the interests of SCANDIC FINANCE GROUP LIMITED.

1.4

The AI Ethics Code supplements existing guidelines, in particular:

  • Group Data Protection Policy,
  • Guideline on Digital Services and Platform Processes,
  • Human Rights Due Diligence and Supply Chain Policy,
  • Corporate Governance and Compliance Policy,
  • Sustainability Policy,
  • Declaration on Combating Modern Forms of Slavery.

In the event of a conflict, the regulation that is stricter and more protective of those affected shall always apply.

2. Core values and guiding principles

2.1 Human dignity and fundamental rights

Artificial intelligence serves people, not the other way around. All artificial intelligence systems and applications must respect human dignity, fundamental rights and personal rights.

2.2 Legal compliance

SCANDIC FINANCE GROUP LIMITED is committed to complying with all relevant national and international standards. This includes, in particular, the European Union's Artificial Intelligence Regulation, the European Union's General Data Protection Regulation, the European Union's Digital Services Act, relevant copyrights, ancillary copyrights and personal rights, as well as industry-specific regulations.

2.3 Responsibility and accountability

For each artificial intelligence system, a clearly designated responsible person is appointed who is accountable for the purpose, risk assessment, documentation and ongoing monitoring.

2.4 Proportionality

The design and use of artificial intelligence must always be proportionate. The higher the risk to data subjects, the stricter the requirements for justification, transparency, oversight and safeguards.

2.5 Transparency and explainability

Users should be informed when they interact with artificial intelligence systems or when content has been generated or significantly influenced by artificial intelligence. The functioning of the systems must be explained in understandable language, insofar as this is compatible with the protection of trade secrets and security interests.

2.6 Fairness and inclusion

Artificial intelligence systems must not create or reinforce unjustified disadvantages. Particular attention must be paid to vulnerable groups and to avoiding structural discrimination.

2.7 Security and resilience

Artificial intelligence systems must be robust against malfunctions, attacks and manipulation. Mechanisms for error detection, safe shutdown and recovery shall be provided.

2.8 Sustainability

The development and operation of artificial intelligence systems must take environmental, social and corporate governance aspects into account. Energy-efficient processes, resource-saving infrastructures and the responsible use of computing capacities are preferred.

3. Governance and responsibilities

3.1 Committee for Ethics in Artificial Intelligence

SCANDIC FINANCE GROUP LIMITED establishes a Committee for Ethics in Artificial Intelligence. This committee is interdisciplinary and includes representatives from the following areas, among others:

  • Technology and development,
  • Legal and compliance,
  • Data protection,
  • Information security,
  • Editorial and product management,
  • Human resources,
  • Relevant business areas such as financial services, health and media.

Tasks of the Committee for Ethics in Artificial Intelligence:

  • Updating the AI Ethics Code and associated guidelines,
  • Deciding on fundamental issues relating to the use of artificial intelligence,
  • Approval of high-risk systems,
  • Evaluation of incidents with ethical relevance,
  • Annual review of the overall risk profile of artificial intelligence systems within the company.

3.2 Responsibility model

A responsibility model is defined for all activities in the life cycle of artificial intelligence systems. It specifies:

  • who is responsible for execution,
  • who bears ultimate responsibility,
  • who is to be involved in an advisory capacity,
  • who is to be informed and in what form.

3.3 Documentation

The structure of the committee, the role descriptions and the decision-making processes of the Committee for Ethics in Artificial Intelligence are documented in writing. Changes require a formal resolution and are communicated transparently.

4. Legal and regulatory framework

4.1 European Union Regulation on Artificial Intelligence

SCANDIC FINANCE GROUP LIMITED aligns its internal procedures with the European Union's Regulation on Artificial Intelligence. This includes, among other things:

  • Classification of systems into prohibited practices, high-risk systems, limited-risk systems and minimal-risk systems,
  • Compliance with requirements for quality management, documentation, logging and human oversight,
  • Technical and organisational measures to ensure security, transparency and traceability.

4.2 European Union General Data Protection Regulation

All processing relevant to data protection law must be aligned with the European Union's General Data Protection Regulation. This includes, in particular:

  • Definition and documentation of legal bases,
  • Consideration of special categories of personal data,
  • Implementation of data protection by design and by default,
  • Carrying out data protection impact assessments,
  • Safeguarding the rights of data subjects.

4.3 European Union Digital Services Act

Digital services related to the European Union are subject to the provisions of the European Union Digital Services Act. These include, in particular:

  • Clear reporting channels for illegal content,
  • Complaint procedures and appeal options,
  • Transparency reports and risk-based assessments.

4.4 Copyright, ancillary copyright and personal rights

When using artificial intelligence to create, process or distribute content, copyright, ancillary copyright and personal rights are taken into account comprehensively. Licence chains are documented and verified.

4.5 Industry-specific standards

Industry-specific standards, such as financial market regulation, health law, aviation law, maritime law, telecommunications law and media law, must also be observed. This includes relevant supervisory requirements and professional standards.

5. Risk classification and assessment of the impact of artificial intelligence

5.1 Risk classification

Each artificial intelligence system is assigned one of the following risk classes before it is introduced (a schematic illustration follows the list):

  • Prohibited practices: systems that may not be operated,
  • High-risk systems: systems with a significant impact on safety, health, fundamental rights or living conditions,
  • Systems with limited risks: systems with transparency requirements and manageable risk potential,
  • Systems with minimal risks: simple support functions with a low risk profile.
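By way of illustration only, such a classification can be mirrored in a simple data structure that ties each class to its minimum safeguards. The class names and the mapping of controls in this sketch are assumptions, not defined system identifiers:

```python
# Illustrative sketch of the four risk classes named above and the minimum
# safeguards they might trigger. Names and mappings are assumptions.
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited practice"   # must not be operated
    HIGH = "high-risk system"
    LIMITED = "limited-risk system"
    MINIMAL = "minimal-risk system"

# Hypothetical mapping; the concrete controls are set by the impact
# assessment and the Committee for Ethics in Artificial Intelligence.
REQUIRED_CONTROLS = {
    RiskClass.HIGH: ["impact assessment", "committee approval",
                     "human oversight", "annual review"],
    RiskClass.LIMITED: ["impact assessment", "transparency notice"],
    RiskClass.MINIMAL: ["impact assessment"],
}

def may_operate(risk: RiskClass) -> bool:
    # Prohibited practices are rejected outright; all other classes proceed
    # subject to the controls listed for them.
    return risk is not RiskClass.PROHIBITED
```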

5.2 Impact assessment

A structured assessment of the impact of artificial intelligence includes:

  • Description of the purpose and functions,
  • Analysis of the groups affected,
  • Legal and ethical assessment,
  • Identification of risks in the areas of law, ethics, security, bias and the environment,
  • Definition and documentation of protective measures,
  • Decision on approval, restriction or rejection of the system.

5.3 Recurring reassessment

Artificial intelligence systems are subject to reassessment in the event of significant changes and at regular intervals. High-risk systems are reviewed at least once a year.

6. Data ethics and data protection

6.1 Data minimisation and purpose limitation

Only data that is absolutely necessary for the fulfilment of the respective purpose is processed. Any change of purpose requires a new legal review and, if necessary, notification of the data subjects.

6.2 Transparency towards data subjects

Data subjects shall be informed in a clear and comprehensible manner about the nature, scope, purpose and legal basis of processing within the framework of artificial intelligence systems. This also includes information on automated decision-making, profiling and the significance and intended effects for the data subject.

6.3 Technical and organisational measures

SCANDIC FINANCE GROUP LIMITED implements appropriate technical and organisational measures to ensure the confidentiality, integrity and availability of data. These include in particular (pseudonymisation is illustrated by way of example after the list):

  • Access and rights concepts,
  • Encryption, pseudonymisation and anonymisation,
  • Logging of accesses and changes,
  • Separate data storage for development, test and production environments.
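As a minimal sketch of the pseudonymisation named above, a keyed hash can replace a direct identifier with a stable pseudonym. The key name and example values are assumptions; in practice the key would be managed in a separate secret store:

```python
# Minimal pseudonymisation sketch using a keyed hash (HMAC-SHA256).
# The key below is a placeholder; in practice it would be held in a secret
# store, separate from the pseudonymised data, so the mapping cannot be
# reversed without it.
import hashlib
import hmac

def pseudonymise(identifier: str, key: bytes) -> str:
    # The same identifier always yields the same pseudonym under one key,
    # so records stay linkable without exposing the identifier itself.
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"example-key-from-secret-store"   # placeholder, not a real key
token = pseudonymise("customer-12345", key)
```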

6.4 Internet identifiers and tracking

The use of internet identifiers such as cookies and similar technologies in connection with artificial intelligence systems is based on the principle of data minimisation. Consent is obtained and documented where necessary, and simple options for revoking consent are provided.

6.5 Transfer to third countries

If personal data is transferred to countries outside the European Economic Area, appropriate safeguards are put in place. The legal situation in the recipient country is assessed and, where it falls short, compensated for by additional protective mechanisms.

7. Life cycle of models and data

7.1 Life cycle of data

The data life cycle comprises:

  • Collection and procurement,
  • Preparation and cleansing,
  • Labelling and quality assurance,
  • Use and analysis,
  • Archiving and deletion.

Each phase is documented and responsibilities are assigned.

7.2 Life cycle of the models

The model life cycle comprises:

  • Problem definition and goal setting,
  • Selection of model architecture,
  • Training and fine-tuning,
  • Testing and validation,
  • Release and commissioning,
  • Monitoring and maintenance,
  • Adaptation, retraining or decommissioning.

7.3 Data cards and model cards

Data cards and model cards are created for central data sets and models. These contain information on origin, representativeness, quality characteristics, known distortions, intended uses, limitations and risks, as well as the persons responsible.
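Such cards can be kept in machine-readable form. The following sketch is illustrative only; its field names are assumptions chosen to mirror the contents listed above:

```python
# Hypothetical machine-readable model card; field names are assumptions
# mirroring the contents described in section 7.3.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    origin: str                                   # provenance of model and data
    intended_uses: list[str]
    limitations_and_risks: list[str]
    known_biases: list[str] = field(default_factory=list)
    quality_metrics: dict[str, float] = field(default_factory=dict)
    responsible_person: str = ""
```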

8. Transparency, explainability and user notices

8.1 Labelling of artificial intelligence systems

Artificial intelligence systems are clearly labelled for users. This can take the form of notices, symbols or brief explanations.

8.2 Explainable results

Where possible, understandable explanations shall be provided for decision-supporting or decision-replacing systems. These shall contain information about which factors have contributed significantly to the result.
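For a simple linear scoring model, for example, such an explanation can be read directly from the factor contributions. The sketch below is illustrative; the factor names and weights are invented for the example:

```python
# Illustrative explanation for a linear scoring model: each factor
# contributes weight * value, so the most influential factors can be
# reported directly. Factor names and numbers are invented examples.
def top_factors(weights: dict[str, float],
                values: dict[str, float],
                k: int = 3) -> list[tuple[str, float]]:
    contributions = {f: weights[f] * values[f] for f in weights}
    return sorted(contributions.items(),
                  key=lambda item: abs(item[1]), reverse=True)[:k]

weights = {"income": 0.4, "existing_debt": -0.7, "account_age": 0.2}
values = {"income": 1.2, "existing_debt": 0.9, "account_age": 0.5}
print(top_factors(weights, values))
# existing_debt has the strongest (negative) contribution, then income
```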

8.3 Feedback and correction mechanisms

Users are given easily accessible options for questioning results, reporting errors and suggesting corrections. Incoming notifications are recorded, reviewed and responded to in a structured manner.

9. Human oversight and supervisory duties

9.1 Human ultimate responsibility

In all critical areas, the final decision-making responsibility remains with humans. Artificial intelligence systems must not make uncontrolled, independent decisions with serious consequences for those affected.

9.2 Supervision and intervention options

Mechanisms are established, illustrated schematically after the list, that enable responsible persons to:

  • check results,
  • stop or shut down systems,
  • make alternative decisions.
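A schematic example of such an intervention point, with an assumed risk score and threshold (all names in the sketch are illustrative placeholders):

```python
# Schematic human-in-the-loop gate: above an assumed risk threshold, the
# system's proposal is routed to a human instead of being executed
# automatically. All names are illustrative placeholders.
def queue_for_human_review(proposal: dict) -> str:
    # Placeholder: hand the proposal to a qualified reviewer.
    return "pending human review"

def execute(proposal: dict) -> str:
    # Placeholder: carry out the automated action.
    return "executed automatically"

def decide(proposal: dict, risk_score: float, threshold: float = 0.7) -> str:
    # Above the threshold, a human makes the final call; the system can
    # additionally be stopped or shut down at any time.
    if risk_score >= threshold:
        return queue_for_human_review(proposal)
    return execute(proposal)
```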

9.3 Multiple-eyes principle

In sensitive areas, such as editorial reporting, financial decisions and healthcare, a multiple-eyes principle applies. Decisions are reviewed by several qualified persons.

10. Security, robustness and adversarial testing

10.1 Threat analyses

Threat analyses are performed for artificial intelligence systems, taking into account attacks on input data, models and outputs.

10.2 Adversarial testing

Systems are regularly subjected to simulated attacks and abuse scenarios in order to identify and remedy vulnerabilities.
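A minimal sketch of such a test, assuming a callable `model` and simple character-level perturbations (real test suites use far richer attack catalogues):

```python
# Minimal robustness fuzz test: perturb an input repeatedly and measure how
# often the model's decision stays unchanged. `model` is an assumed callable.
import random

def perturb(text: str) -> str:
    # Insert one random character; a deliberately simple mutation.
    i = random.randrange(len(text) + 1)
    return text[:i] + random.choice("abcxyz ") + text[i:]

def stability(model, text: str, trials: int = 100) -> float:
    baseline = model(text)
    stable = sum(model(perturb(text)) == baseline for _ in range(trials))
    return stable / trials   # fraction of perturbed inputs left unchanged
```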

10.3 Security measures

Technical protective measures include, among other things (two of these guards are sketched after the list):

  • Input and output checks,
  • Limitation of queries and resources,
  • Monitoring of suspicious patterns,
  • Emergency mechanisms for rapid response.
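A minimal sketch of the first two measures, an input check and a query limiter; the limits chosen are assumptions for the example:

```python
# Illustrative runtime guards: a simple input check and a token-bucket
# query limiter. The limits are assumptions for the example.
import time

MAX_INPUT_CHARS = 4000   # assumed limit

def check_input(text: str) -> bool:
    # Reject empty or oversized inputs before they reach the model.
    return 0 < len(text) <= MAX_INPUT_CHARS

class TokenBucket:
    # Limits queries per caller (limitation of queries and resources).
    def __init__(self, rate_per_second: float, burst: int):
        self.rate, self.capacity = rate_per_second, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```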

11. Supply chain, human rights and fair labour

11.1 Human rights due diligence

SCANDIC FINANCE GROUP LIMITED is committed to respecting human rights throughout its supply chain. Service providers are assessed for their compliance with fundamental labour, social and environmental standards.

11.2 Modern forms of slavery

Any form of forced labour, child labour or human trafficking is firmly rejected. Suspected cases are investigated and, if necessary, business relationships are terminated.

11.3 Protection of whistleblowers

People who report abuses in good faith are protected from discrimination. The confidentiality of their identity is maintained to the extent legally possible.

12. Bias control, fairness and inclusion

12.1 Review of data sets

Data sets are analysed to identify biases that could lead to unfair treatment of certain groups.
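One simple form of such an analysis compares group shares in a data set with reference shares. The sketch below is illustrative; the groups, counts and reference values are assumptions:

```python
# Illustrative representation check: deviation of group shares in a data
# set from assumed reference shares. All values are invented examples.
def representation_gaps(counts: dict[str, int],
                        reference_share: dict[str, float]) -> dict[str, float]:
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in reference_share.items()}

gaps = representation_gaps({"group_a": 750, "group_b": 250},
                           {"group_a": 0.5, "group_b": 0.5})
# {'group_a': 0.25, 'group_b': -0.25}: group_b is under-represented
```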

12.2 Fair use of models

Models are designed and tested with the aim of achieving fair results. Differences in the degree of impact on different groups are documented and, where possible, reduced.

12.3 Inclusive design

User interfaces and communication channels are designed to be inclusive. Accessibility and multilingualism are promoted to enable access for all groups.

13. Generative artificial intelligence, proof of origin and labelling

13.1 Labelling of generated content

Content that has been created predominantly by generative artificial intelligence processes is identified as such, particularly in journalistic contexts, in advertising, and in financial or health information.

13.2 Proof of origin

Where technically possible, watermarks, signatures or metadata are used to make the origin of content traceable.
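By way of illustration, origin metadata can be attached to generated content and signed so that later tampering is detectable. This sketch is a simplified stand-in for watermarking or signed-metadata standards; the field and key names are assumptions:

```python
# Simplified provenance sketch: pair content with origin metadata and an
# HMAC signature so tampering is detectable. Field and key names are
# assumptions; key handling would happen in a secret store.
import hashlib
import hmac
import json

def sign_content(content: str, metadata: dict, key: bytes) -> dict:
    payload = json.dumps({"content": content, "metadata": metadata},
                         sort_keys=True).encode("utf-8")
    signature = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"content": content, "metadata": metadata, "signature": signature}

record = sign_content(
    "Generated summary ...",
    {"generator": "ai-system", "created": "2026-01-01"},  # illustrative
    b"example-signing-key",                               # placeholder key
)
```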

13.3 Third-party rights

When training and using generative systems, the rights of authors and holders of ancillary copyrights shall be respected. Unauthorised use shall be refrained from.

14. Content, moderation and processes under the Digital Services Act

14.1 Reporting channels

SCANDIC FINANCE GROUP LIMITED has set up easily accessible reporting channels for illegal or abusive content. Reports are reviewed and processed promptly.

14.2 Complaints procedure

There are complaint and appeal procedures in place that allow users to question and review decisions about content or accounts.

14.3 Transparency reports

Transparency reports are published at regular intervals, providing information on how reported content is handled, the use of artificial intelligence systems for moderation, and the measures taken to mitigate risk.

15. Domain-specific use in the SCANDIC brand ecosystem

15.1 News and media

In news and media offerings, artificial intelligence systems serve as tools for research, translation, summarisation and moderation. Editorial responsibility remains with journalistically trained individuals.

15.2 Data and data centre services

SCANDIC DATA provides artificial intelligence infrastructures that ensure client separation, encryption, key management and comprehensive monitoring.

15.3 Health

In healthcare applications, artificial intelligence systems support professionals in diagnosis and therapy decisions, but do not replace them. Final decisions are made by qualified healthcare professionals.

15.4 Aviation and maritime

In aviation and yacht services, artificial intelligence systems are used to optimise routes, maintenance and customer experience. Safety-related decisions remain the responsibility of pilots and captains.

15.5 Real estate

Real estate applications use valuation models that take transparent criteria into account. Discrimination in renting or selling is actively avoided.

15.6 Financial services, payment transactions, trade, trust and digital assets

Artificial intelligence systems support the detection of fraud, compliance with anti-money laundering and counter-terrorist financing regulations, risk management and market surveillance. Decisions with a significant impact on customers are justified in a comprehensible manner.

15.7 Mobility and vehicles

Personalised offers and assistance functions that respect privacy and security are used in mobility and vehicle services. Movement data is only used under strict protection conditions.

16. Third parties, procurement and risk management of service providers

16.1 Review prior to cooperation

Before collaborating with third-party providers of artificial intelligence components, their security levels, data protection standards, data processing locations, certifications and subcontractor structures are reviewed.

16.2 Contractual provisions

Contracts with third-party providers contain provisions on:

  • Responsibilities,
  • Inspection and audit rights,
  • Transparency obligations in the event of changes and incidents,
  • Performance indicators for service quality,
  • Termination rights for good cause.

16.3 Ongoing monitoring

The performance and risk profile of third-party providers are monitored regularly. Significant deviations lead to corrective measures or, if necessary, to the termination of the cooperation.

17. Operation, monitoring, emergency and recovery plans

17.1 Operation and monitoring

Artificial intelligence systems in productive use are continuously monitored (a monitoring sketch follows the list). This includes:

  • technical stability and availability,
  • result quality and error rates,
  • anomalies in usage behaviour,
  • indications of security-related incidents.
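A minimal sketch of such monitoring for result quality, using a rolling error rate with an assumed window and threshold:

```python
# Illustrative quality monitor: rolling error rate over the last N results,
# raising an alert when an assumed threshold is exceeded.
from collections import deque

class ErrorRateMonitor:
    def __init__(self, window: int = 1000, threshold: float = 0.05):
        self.results = deque(maxlen=window)   # most recent outcomes only
        self.threshold = threshold

    def record(self, is_error: bool) -> bool:
        # Returns True when the rolling error rate breaches the threshold,
        # signalling that escalation (section 17.2) should begin.
        self.results.append(is_error)
        rate = sum(self.results) / len(self.results)
        return rate > self.threshold
```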

17.2 Emergency management

Plans are in place for dealing with failures and security incidents. These include:

  • Defined recovery times,
  • Communication channels,
  • Escalation levels,
  • Regular exercises and updates.

17.3 Configuration and secret management

Configurations and confidential information are centrally managed, protected and regularly reviewed. Access is strictly granted according to the principle of least privilege.

18. Incidents and remedies (ethics, data protection, security)

18.1 Types of incidents

Incidents are divided into at least the following categories:

  • ethical incidents,
  • data protection incidents,
  • security incidents.

18.2 Reporting and processing procedures

There are clear reporting chains and processing procedures for all types of incidents. These regulate:

  • who receives incidents,
  • how quickly a response is required,
  • which departments are to be involved,
  • how the root cause analysis is carried out.

18.3 Documentation and learning

Every incident is documented. Lessons are learned from incidents and incorporated into guidelines, training courses and technical measures.

19. Metrics, key performance indicators and safeguards

19.1 Metrics for control

SCANDIC FINANCE GROUP LIMITED defines metrics to monitor compliance with this AI Code of Ethics. These include, for example:

  • Number and proportion of artificial intelligence systems evaluated,
  • Processing times for complaints,
  • Frequency and severity of incidents,
  • Training rates.

19.2 Metrics for fairness and quality

Appropriate metrics are defined to monitor the fairness, quality and stability of models. Differences in error rates between groups are identified and evaluated.
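One such metric is the gap between per-group error rates. The sketch below is illustrative; the group labels, outcomes and any tolerance applied to the gap are assumptions:

```python
# Illustrative fairness metric: spread between per-group error rates.
# Groups and outcomes are invented examples.
def error_rate_gap(errors_by_group: dict[str, list[bool]]) -> float:
    rates = {g: sum(e) / len(e) for g, e in errors_by_group.items() if e}
    return max(rates.values()) - min(rates.values())

gap = error_rate_gap({
    "group_a": [False, True, False, False],   # 25 % errors
    "group_b": [False, False, False, False],  # 0 % errors
})
# gap == 0.25; a gap above an agreed tolerance triggers a review
```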

19.3 Sustainability indicators

Key figures on energy consumption, utilisation of computing resources and other environmental aspects are collected and included in decisions on the selection of models and infrastructures.

20. Training, awareness and cultural change

20.1 Mandatory training

Employees in relevant roles regularly participate in training courses that teach the basics, opportunities and risks of artificial intelligence. In addition, special training courses are offered on data protection, information security and industry-specific topics.

20.2 Awareness-raising measures

Guidelines, internal communication campaigns, specialist forums and exchange formats are used to raise awareness of ethical issues relating to artificial intelligence.

20.3 Role of managers

Managers serve as role models. They are responsible for actively demanding and promoting compliance with this AI Code of Ethics.

21. Implementation and roadmap

21.1 Period from zero to six months

  • Complete survey of all use cases of artificial intelligence,
  • Establishment and commencement of work by the Committee for Ethics in Artificial Intelligence,
  • Introduction of the assessment of the impact of artificial intelligence for new systems,
  • Start of training programmes for key roles.

21.2 Period from six to twelve months

  • Expansion of the assessment of existing systems,
  • Creation and introduction of uniform data cards and model cards,
  • Establishment of binding responsibility models,
  • First internal transparency reports.

21.3 Period of twelve to twenty-four months

  • Alignment of the management system with relevant standards,
  • Preparation for possible external audits,
  • Integration into sustainability reporting,
  • Regular review and further development of this AI Code of Ethics.

22. Roles and responsibility matrix

22.1 Application manager

Responsible for the technical management and target achievement of an artificial intelligence system.

22.2 Model manager

Responsible for data, training, evaluation and documentation of the model.

22.3 Data protection officer

Advises on and monitors all issues relating to data protection.

22.4 Information Security Management

Responsible for security concepts, threat analyses and coordination of security incidents.

22.5 Responsible Editor

Ensures compliance with editorial and media ethics standards in media offerings.

22.6 Service Manager

Responsible for the technical operation, monitoring and maintenance of an artificial intelligence system.

22.7 Procurement manager

Evaluates third-party providers and drafts contracts with a view to security, data protection and compliance.

23. Checklists

23.1 Brief assessment of the impact of artificial intelligence

  • Is the purpose of the system clearly defined?
  • What is the legal basis for data processing?
  • Which groups are affected?
  • What are the risks in terms of law, ethics, security, bias and the environment?
  • What safeguards are planned?
  • How is human oversight organised?

23.2 Data release checklist

  • Is the data source legitimate and trustworthy?
  • Is the scope of the data minimised?
  • Are retention periods defined?
  • Are there appropriate access controls in place?
  • Are transfers to third countries correctly secured?

23.3 Checklist for release for commissioning

  • Are data cards and model cards complete?
  • Is there an assessment of the impact of artificial intelligence?
  • Have security and data protection measures been implemented and tested?
  • Is there a monitoring concept in place?
  • Has training been provided for the relevant roles?
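The checklist above can also be mirrored as a simple release gate in which commissioning is approved only if every check is satisfied; the field names in this sketch are assumptions:

```python
# Hypothetical release gate mirroring checklist 23.3: approval requires
# every check to be satisfied. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class ReleaseChecklist:
    cards_complete: bool            # data cards and model cards
    impact_assessment_done: bool
    security_privacy_tested: bool
    monitoring_in_place: bool
    roles_trained: bool

    def approved(self) -> bool:
        return all(vars(self).values())
```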

24. Forms and templates

24.1 Model card

Contains, among other things:

  • Description of the model and its purpose,
  • Types of data used,
  • Training procedures,
  • Measured variables and target values,
  • Known limitations and risks,
  • Responsible persons and contact information.

24.2 Data card

Contains, among other things:

  • Origin and licensing status of the data,
  • Quality characteristics,
  • Representativeness,
  • Known distortions,
  • Restrictions on use.

24.3 Incident report

Contains, among other things:

  • Description of the incident,
  • Affected systems and persons,
  • Immediate measures,
  • Cause analysis,
  • Long-term corrective measures.

25. Glossary and references

Artificial intelligence:

Systems that generate content, predictions, recommendations or decisions that were previously reserved for human intelligence.

High-risk system:

Artificial intelligence system with a significant impact on the safety, health, fundamental rights or living conditions of affected persons.

Assessment of the impact of artificial intelligence:

Structured process for analysing the legal, ethical, safety-related, fairness-related and environmental impacts of an artificial intelligence system.

Human oversight:

The planned and empowered role of individuals who monitor, understand, question and, if necessary, intervene to correct artificial intelligence systems.

Adversarial testing:

Targeted simulation of attacks and abuse scenarios to identify and remedy vulnerabilities in artificial intelligence systems.

Accountability model:

Model for the clear allocation of responsibilities, accountability, advisory roles and information obligations.

Key references:

  • European Union Regulation on Artificial Intelligence,
  • European Union General Data Protection Regulation,
  • European Union Digital Services Act,
  • Organisation for Economic Co-operation and Development Guiding Principles for Trustworthy Artificial Intelligence,
  • Artificial Intelligence Risk Management Framework of the National Institute of Standards and Technology of the United States of America,
  • Relevant international standards on management systems for artificial intelligence,
  • SCANDIC FINANCE GROUP LIMITED's internal guidelines on data protection, digital services, human rights and supply chains, corporate governance, sustainability and combating modern forms of slavery.

Final provision:

This AI Ethics Code is an integral part of SCANDIC FINANCE GROUP LIMITED's compliance framework. Violations may result in labour, civil and criminal consequences. Management is expressly committed to its implementation, ongoing development and effective application in all business areas worldwide.

Drafted, signed and approved:

The Board of Directors of SCANDIC FINANCE GROUP LIMITED

Hong Kong, SAR-PRC, 1 January 2026