Artificial Intelligence as a Service (AIaaS) is transforming the way businesses and individuals access and leverage AI capabilities. AIaaS providers offer a range of services, from pre-built models to customized machine learning solutions, delivered through cloud platforms.
However, the adoption of AIaaS raises complex legal and ethical questions, particularly in the context of European data protection and intermediary liability law.
This article delves into the legal responsibilities and liabilities associated with AIaaS. It focuses primarily on the roles and obligations of AIaaS providers, addressing critical issues such as data protection under the General Data Protection Regulation (GDPR) and the challenges posed by the dynamic and interconnected nature of AIaaS. Understanding the legal landscape is crucial for both providers and users of AIaaS, as it impacts data privacy, security, and compliance within this rapidly evolving field.
Legal Responsibilities and Liabilities
In this chapter, we delve into the legal issues arising from AI as a Service (AIaaS) in the context of European data protection and intermediary liability law. Our primary focus will be on understanding the roles, responsibilities, and potential liabilities of AIaaS providers, with an emphasis on how the use of customer input data for training AI models affects these aspects.
We also explore the validity of supplementary processing under the General Data Protection Regulation (GDPR) and examine how the dynamic and interconnected nature of AIaaS may impact providers’ protection from liability for illegal activities involving their services.
Throughout this discussion, we emphasize that many legal challenges faced by AIaaS providers stem from their own activities, particularly those related to supplementary processing, and the fact that, when services are offered generically on a turn-key basis, providers often have limited knowledge of how customers use their services. Additionally, we raise questions about whether existing legal frameworks are suitable for the intricate, networked, and ever-evolving relationships inherent in AIaaS.
The GDPR provides a comprehensive framework governing the processing of personal data. Personal data is broadly defined as any information relating to an identified or identifiable natural person. Within GDPR, entities involved in processing personal data are categorized as either data controllers or data processors. Processing encompasses various operations performed on personal data, including collection, storage, transmission, and analysis. All processing of personal data covered by GDPR must be based on one of the lawful grounds specified within the regulation.
Determining whether data qualifies as personal data depends on two key questions:
(1) Does the data relate to a natural person?
(2) Is the individual identifiable from that data?
In many instances, data processed within the AIaaS chain will clearly relate to a natural person.
The identifiability of the person is a crucial factor in this determination.
Identifiability, as defined by GDPR, means that a person can be identified, directly or indirectly, from the data. GDPR does not require that the data alone must enable identification, nor must all the information needed for identification be held by a single entity. GDPR acknowledges scenarios where controllers process personal data without being able to identify the data subjects themselves. For example, if the purposes of processing do not necessitate the identification of data subjects, controllers are not obliged to acquire additional identifying information solely for GDPR compliance.
The core question regarding identifiability revolves around whether a person can be distinguished from others using the data, either through that data alone or in conjunction with other data. This determination should consider all reasonable means that the controller or another party could use to identify the person, whether directly or indirectly.
Factors such as the costs, time, available technology, and potential technological developments should be considered. Importantly, the CJEU (Court of Justice of the European Union) has established that data does not need to enable immediate identification to be classified as personal data. Additionally, GDPR contemplates situations where controllers may process personal data without being able to identify the data subjects, provided that such identification is neither prohibited by law nor practically impossible.
Many AI services inherently involve personal data, such as audience analytics, voice recognition, speech transcription, facial recognition, and the processing of sound, images, and text. Customers using AIaaS are generally aware of whether they process personal data, especially when their applications directly interact with or monitor third parties.
However, AIaaS providers may have limited knowledge of the specific data being processed due to the generic nature of the services they offer. This lack of knowledge arises because AIaaS providers often serve numerous customers, each with their unique use cases, and may not conduct extensive checks on how each customer uses their services, especially in cases where personal or special category data is not inherent in the service.
In practice, to avoid unintentionally processing personal data unlawfully, AIaaS customers and providers should apply GDPR’s standards to all processing within the AIaaS chain, regardless of whether it is known to involve personal data. This precaution is particularly important because GDPR applies to personal data even when it is processed alongside non-personal data in mixed datasets. Without such safeguards, AIaaS providers risk inadvertently processing personal data in violation of GDPR, particularly if they wrongly assume that certain data is not personal data.
While GDPR’s requirements apply to all stages of the AIaaS processing chain, several issues are especially relevant for AIaaS. The complexity of interactions among third parties, customers, and providers makes it challenging to determine the roles of data controller and data processor accurately.
The dynamic, interconnected nature of AIaaS also poses difficulties for providers in managing data protection responsibilities and complying with legal bases for processing. As a result, we provide a high-level analysis of data protection issues arising from common provider practices to illustrate the general legal context for AIaaS, particularly when offered as a model-based service on a turn-key basis.
Our focus is on the legal position of providers, evaluating their real-world practices against GDPR and CJEU case law, rather than delving into specific use cases. However, we acknowledge that AI services offered on a consultancy basis or tailored exclusively for specific customers may involve arrangements that deviate from the scenarios described herein.
In the subsequent sections, we will explore key aspects of data protection law that are relevant to AIaaS providers, including the assignment of roles, the legal basis for processing, and the challenges posed by the intricate and evolving nature of AIaaS.
Controllers and Processors
In the realm of AIaaS and its complex processing chain, understanding the roles of controllers and processors under the General Data Protection Regulation (GDPR) is paramount. GDPR imposes distinct compliance obligations on these two categories, and whether an entity assumes the role of a controller or processor can have significant implications for data protection responsibilities.
Controllers: The bulk of GDPR’s compliance obligations primarily fall upon data controllers. Controllers are the entities responsible for determining the why and how of personal data processing. They are required to adhere to data protection principles such as lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability. Controllers must establish a valid legal basis for processing personal data, implement technical and organizational measures to ensure GDPR compliance, and be able to demonstrate that compliance.
Processors: Processors, on the other hand, are entities that carry out personal data processing activities on behalf of controllers, following the controller’s instructions. Processors have specific GDPR obligations, including ensuring data security, cooperating with controllers, and only processing data as instructed by the controller.
As of August 2020, major AIaaS providers like AWS, Microsoft Azure, and Google Cloud generally define themselves as processors in their service agreements, with customers typically considered controllers. However, contractual definitions may not definitively determine these roles, as the factual circumstances of data processing also influence them.
Several key CJEU (Court of Justice of the European Union) cases have shed light on assigning controllership, highlighting important points:
- The controller concept should be interpreted broadly for comprehensive data subject protection.
- Entities that influence processing for their own purposes or commercial interests are considered controllers, even if they don’t directly control the data.
- Access to personal data is not a prerequisite for controllership.
- Joint controllership does not absolve any controller from GDPR obligations, and using another entity’s platform does not exempt a controller from compliance.
- Joint controllership does not necessarily mean joint responsibility for all processing stages; different actors may be controllers for specific processing stages.
In the context of AIaaS, understanding whether one is a controller or processor is critical, as it determines the extent of GDPR compliance responsibilities and obligations. Providers and customers must carefully assess their roles and responsibilities within the AIaaS processing chain to ensure adherence to data protection regulations.
Figure. Role assignments in the AIaaS chain where providers do not influence the processing of input or output data for their own purposes. Here, customers are controllers for all stages and likely also for any subsequent processing that they perform. AIaaS providers act as processors for their customers in the transfer, analysis, and return stages.
The delineation of roles as controllers or processors in the context of AIaaS can be complex, with significant implications for data protection responsibilities under GDPR. Here, we unravel the complexities of these roles:
Customers as Controllers: Customers of AIaaS services will, in most cases, assume the role of controllers throughout the entire AIaaS processing chain. They are the ones who define the purposes and methods of data processing. For example, when customers offer AI-driven functionalities to third parties, they delegate the execution of the processing to the AIaaS provider while still determining its purposes and methods. This applies regardless of how data is transferred to the provider, even when it bypasses the customer’s equipment. In these scenarios, the customer’s application dictates how data is processed and, consequently, defines the purposes and means of the AIaaS processing.
Providers as Processors: Determining the roles of AIaaS providers is more nuanced, especially considering their supplementary processing activities. If an AIaaS provider solely analyzes input data, returns outputs to customers, and engages in ancillary processing necessary for providing the service (e.g., billing, security, compliance), they typically act as processors. In this case, the customer retains control over the purposes of processing. The provider’s activities are performed as directed by the customer, and their primary role is to facilitate the requested service.
Providers as Controllers: However, if the provider undertakes any processing for their own purposes, independent of the core AIaaS service, the scenario changes. This supplementary processing, which operates alongside the AIaaS chain, introduces complexities. Providers would then influence the purposes of processing, as these purposes now extend beyond the customer’s intent. The provider’s supplementary processing activities become intertwined with the analysis and transfer stages of the AIaaS chain, as they serve both the provider’s supplementary purposes and the primary service to the customer. This dual purpose effectively means that providers also determine the means of processing at these stages, as they dictate how the customer’s data is utilized for their own supplementary processing.
Consequently, when AIaaS providers engage in supplementary processing, they may be considered controllers for the transfer and analysis stages of the AIaaS processing chain. They influence not only the why but also the how of data processing at these stages. Nevertheless, this controllership does not extend beyond these stages, as the return of output data to customers, and its subsequent use, does not serve the purpose of facilitating the provider’s supplementary processing. In the return stage, providers typically act as processors, as they execute the customer’s instructions.
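The stage-by-stage role analysis above can be summarized as a small decision procedure. The sketch below is purely illustrative: the stage names and the single boolean flag are our own modeling assumptions, encoding only this section’s argument, and it is of course not a legal test.

```python
def provider_role(stage: str, supplementary_processing: bool) -> str:
    """Illustrative mapping of an AIaaS provider's likely GDPR role per stage.

    Stages follow the AIaaS chain described in the text
    (transfer -> analysis -> return); the customer is assumed to be a
    controller throughout. This encodes the section's reasoning only.
    """
    if stage == "return":
        # Returning outputs serves only the customer's purposes, so the
        # provider merely executes the customer's instructions.
        return "processor"
    if stage in ("transfer", "analysis"):
        if supplementary_processing:
            # The provider uses customer data for its own purposes at these
            # stages, so it (co-)determines the purposes and means.
            return "controller"
        return "processor"
    raise ValueError(f"unknown stage: {stage}")

# Example: a provider that uses customer inputs for supplementary processing
for stage in ("transfer", "analysis", "return"):
    print(stage, "->", provider_role(stage, supplementary_processing=True))
```

Running the loop makes the asymmetry visible: supplementary processing changes the provider's role at the transfer and analysis stages but not at the return stage.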
Notably, this legal assignment of roles in AIaaS can differ from the traditional cloud services model, primarily due to the supplementary processing activities of AIaaS providers themselves. Indeed, leading AIaaS providers such as AWS, Microsoft Azure, and Google Cloud engage in supplementary processing for various AI services. This complex interplay of roles underscores the evolving and intricate nature of data protection responsibilities in the AIaaS landscape.
The Relationship Between Controllers
Figure. Role assignment in the AIaaS chain where providers do influence the processing of customer data for their own purposes (i.e., to facilitate supplementary processing, here using input data and model outputs). Customers are controllers for the transfer and analysis stages, and are also controllers (either solely or jointly with others) for the collection stage and likely also for any subsequent processing that they perform. Providers are joint controllers with customers for the transfer and analysis stages and processors for the return stage.
When both customers and AIaaS providers assume the role of controllers at various stages of the AIaaS processing chain, the question arises:
Are they joint controllers or separate controllers for the same processing?
GDPR defines joint controllers as entities that jointly determine the purposes and means of processing. However, in complex, multiparty environments like AIaaS, various forms of ‘pluralistic control’ can emerge, where controllers may pursue their own purposes to varying degrees.
The European Data Protection Board (EDPB) emphasizes that “jointly” should imply “together with” or “not alone,” depending on the specific arrangements between controllers.
According to the EDPB, joint controllership can arise when controllers make a ‘common decision’ about purposes and means or when they have ‘converging decisions’ regarding the same purposes and means. Converging decisions often occur when the processing serves both parties’ commercial or economic interests, and the processing would not be possible without the participation of both controllers.
In the context of AIaaS, the service agreements of major providers like AWS, Microsoft Azure, and Google Cloud typically specify that customer data may be used for supplementary processing, with varying options for opting in or out. Customers implicitly or explicitly agree to these terms when using AI services. Unless customers opt out of supplementary processing, their usage agreement implies consent for the provider to use their input data to facilitate supplementary processing. In such cases, the purposes of both the customer’s and the provider’s processing converge: providing the AI service to the customer and facilitating the provider’s supplementary processing.
As a result, customers and providers become joint controllers for the transfer and potentially the analysis stages of the AIaaS processing chain when supplementary processing takes place. However, if providers engage in supplementary processing even after customers have opted out, the provider becomes a separate controller for those stages. The customer’s and provider’s purposes do not align, and they do not jointly determine the purposes and means.
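The joint-versus-separate distinction just described can likewise be sketched as a decision procedure. Again, this is an illustrative model of the section’s argument under our own simplifying assumptions (two boolean flags for the transfer and analysis stages), not a statement of the legal test itself.

```python
def controllership(provider_processes_for_own_purposes: bool,
                   customer_opted_out: bool) -> str:
    """Illustrative classification of controllership for the transfer and
    analysis stages of the AIaaS chain, following this section's reasoning."""
    if not provider_processes_for_own_purposes:
        # No supplementary processing: the provider acts only on the
        # customer's instructions.
        return "provider is a processor"
    if customer_opted_out:
        # Purposes no longer converge: the provider pursues its own
        # purposes independently of the customer.
        return "provider is a separate controller"
    # Converging decisions: the processing serves both the provision of the
    # service and the provider's supplementary processing.
    return "customer and provider are joint controllers"
```

For example, `controllership(True, False)` captures the default position under the major providers’ service agreements, while `controllership(True, True)` captures the opt-out scenario discussed above.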
The distinction between joint and separate controllers is significant because joint controllers share responsibilities for various aspects of processing and must establish formal arrangements to divide those responsibilities, including data subject rights. At the time of writing, the service agreements of major AIaaS providers do not account for such a situation.
GDPR itself does not set out enforcement powers specific to joint controllership, but supervisory authorities have corrective powers at their disposal, including warnings, orders to comply, bans on processing, and significant fines. The exact consequences for joint controllers who fail to properly arrange their relationship nevertheless remain somewhat unclear.
In this intricate landscape of AIaaS, understanding and addressing the nuances of joint controllership is vital to ensure compliance with data protection regulations and the protection of individuals’ rights and data privacy.
Controllership for Supplementary Processing
In our analysis, we establish that providers who engage in supplementary processing are joint controllers with customers for the transfer and analysis stages of the AIaaS processing chain. However, we argue that providers are sole controllers for the supplementary processing itself.
It’s crucial to differentiate between (a) processing of customer data carried out as part of the AIaaS chain to provide the AI service and facilitate supplementary processing and (b) processing of customer data separately by the provider as part of their supplementary processing. Supplementary processing constitutes its own distinct processing chain, exclusively serving the provider’s purposes.
In this context, the customer cannot be considered a controller for the separate supplementary processing chain because they do not influence its purposes or means. Therefore, the provider is the sole data controller for this supplementary processing.
Legal Bases for Processing
Controllers must have a valid legal basis for processing personal data, and this requirement extends throughout the entire AIaaS chain. For ‘ordinary’ personal data, GDPR provides six bases for processing, including consent, contract performance, and legitimate interests of the controller. However, processing special category data faces more stringent restrictions, with only a few exemptions. In most AIaaS circumstances, obtaining explicit consent from data subjects for the purposes of the AIaaS processing chain is necessary.
Given the challenges in distinguishing between different data categories and the strict requirements for processing special category data, providers may choose to take a precautionary approach and treat all input data as special category data. Providers that do not perform supplementary processing may not need their own legal basis for processing; in such cases, they are processors acting under the instruction of their controller customers.
However, when special category data is processed within the AIaaS chain, customers and providers acting as controllers typically need explicit consent from data subjects for the purposes of the processing. This consent should be obtained for each purpose for which the AIaaS processing chain is executed, which includes both providing the service to customers and facilitating providers’ supplementary processing.
The challenge lies in obtaining explicit consent, especially when customers do not interact directly with passive third parties, such as in surveillance scenarios. Providers, serving as joint controllers for the transfer and analysis stages with limited third-party interaction, may rely on customers’ assurances regarding explicit consent. However, this approach carries the risk of unlawful processing if customers fail to obtain valid consent.
For providers’ supplementary processing, the situation becomes more complex. Providers may not always know whether customer data involves special category data, making reliance on the substantial public interest exemption challenging. Therefore, providers should, in practice, rely on the explicit consent basis for this processing.
However, providers face obstacles in obtaining explicit consent themselves, since they typically have little or no direct interaction with data subjects, and customers may lack the information about providers’ processing that specific consent requires. Moreover, data subjects must be able to refuse consent without detriment, which is not always feasible in AIaaS.
Currently, none of the major AIaaS providers’ service agreements contain provisions regarding consent evidence, which suggests that supplementary processing involving special category data may be conducted unlawfully. Consequently, AIaaS providers acting as sole controllers for supplementary processing might unintentionally violate GDPR provisions and face potential enforcement actions, including fines, bans on processing, and legal claims for compensation.
The complex landscape of AIaaS and data processing requires careful consideration of legal bases for processing, particularly when special category data is involved, to ensure compliance with data protection regulations and safeguard data subjects’ rights and privacy.
Intermediary Liability

The provision of AI services, often offered in a ‘turn-key’ fashion, poses potential challenges when it comes to intermediary liability. These challenges stem from the fact that AI services are provided without extensive checks on customer identities or their intended uses. This opens up the possibility for AI services to be misused in support of various forms of illegal activity, such as financial crimes, fraud, harassment, or intellectual property infringement, or to give rise to tort liability resulting from their misuse. While customers typically bear direct liability for any illegal use of AI services, the question arises as to whether providers may also face some level of liability for enabling, facilitating, or even directly engaging in such illegal activities.
The EU’s E-Commerce Directive offers certain protections from liability for providers of information society services.
These protections are potentially available for three specific activities:
(i) acting as a ‘mere conduit’ (transmitting information between individuals without selecting or modifying the content),
(ii) ‘caching’ (a technical activity associated with acting as a conduit), and
(iii) ‘hosting’ (storing information provided by recipients of the service).
Whether AIaaS providers are eligible for protection hinges on whether AIaaS qualifies as an information society service and whether the providers’ activities fall into one of these categories. However, the application of these protections is subject to specific conditions.
AIaaS undeniably falls under the category of information society services. It is provided for remuneration, at a distance, by electronic means, and at the individual request of recipients, making AIaaS providers subject to the E-Commerce Directive.
However, AIaaS providers do not fit the criteria for acting as mere conduits, caching, or hosting under the directive. These protections are designed for services that transmit or store information in a neutral, passive, and technical manner, without modifying or actively engaging with the content. In contrast, AIaaS providers actively analyze customer input data using their algorithms, generate new information, and return it to customers. This activity goes beyond the mere transmission or storage of data and involves active processing and modification, disqualifying AIaaS providers from these protections.
Even if one were to argue that AIaaS providers qualify as hosts, they still operate beyond the scope of the directive’s protections. The directive requires service providers to act as intermediary service providers, engaging in activities of a “mere technical, automatic, and passive nature” without having knowledge of or control over the information they transmit or store. AIaaS providers actively analyze data, exercising control over the information processed and generated during the service. This control, even without actual knowledge of specific content, disqualifies them from the intermediary protections.
In summary, AIaaS providers are not protected from liability for illegal activities conducted using their services. Whether considered as hosts or not, the active nature of AIaaS providers’ involvement in processing and generating information for customers places them outside the liability shields provided by the E-Commerce Directive. This lack of protection leaves AIaaS providers in a position of uncertainty regarding potential liabilities arising from the illegal activities of their customers. While providers may not be directly liable for customer misconduct, the fact that AIaaS enables, facilitates, and underpins various application functionalities means providers may face complex questions about their involvement in such activities.
Challenges for Existing Law
The issues surrounding AIaaS providers, controllership, and liability highlight the limitations of existing legal frameworks that originated in a different era of data processing. These frameworks were designed for a more linear and less complex understanding of data processing, and they are struggling to adapt to the dynamic, networked, and complex nature of contemporary processing architectures and relationships.
- Outdated Legal Concepts: The concepts of data controller and processor were introduced into EU law at a time when data processing was more straightforward. These concepts were developed under the Data Protection Directive and maintained in GDPR, but they may not effectively address the complexities of modern data processing relationships.
- Complex Networked Environments: The current legal framework does not easily accommodate the intricate, networked relationships that arise in contemporary data processing environments. With the potential for multiple entities to act as controllers or processors at different stages of AIaaS, assigning responsibilities and understanding data processing relationships becomes challenging.
- Lack of Clarity: In many cases, it is difficult to clearly define who the controllers are and what their respective roles and responsibilities entail. The legal landscape becomes even murkier when third-party applications are involved, as they can interact with AIaaS providers, further complicating the assignment of responsibilities.
- Influence vs. Control: The traditional understanding of controller-processor relationships does not accurately reflect the power dynamics in many modern data processing scenarios. AIaaS providers often have significant influence over the means and purposes of processing even when they are contractually designated as processors. The distinction between influence and control remains unclear in the legal framework.
- Dominance of Major Providers: Large AIaaS providers, such as Amazon, Google, and Microsoft, hold considerable influence over the AI services market. Their policies and decisions can significantly impact the purposes and means for which AIaaS can be used. Smaller organizations may find themselves locked into these providers’ ecosystems, limiting their choices and influence.
- Complex Application of Data Protection Laws: The application of data protection laws becomes increasingly complex in AIaaS environments. Customers are often left with limited insight into how AI models are engineered and how they impact their applications. This lack of transparency raises questions about data subjects’ rights and supervisory authorities’ oversight capabilities.
- A Need for Legal Reform: Considering the challenges posed by AIaaS, there is a need to rethink data protection laws. The law could adopt a more inclusive approach, where AIaaS providers are always considered controllers due to their active role in determining the purposes and means of processing. Alternatively, sector-specific frameworks tailored to the unique dynamics of AIaaS may be needed.
- Liability Protections: The existing liability protections under the E-Commerce Directive may need reevaluation in the context of AIaaS. AIaaS providers play an active role in enabling and facilitating customer applications, which may warrant greater legal scrutiny. Providers could be required to take proactive steps to mitigate the risk of illegal activities by customers, such as conducting background checks and vetting applications.
In summary, existing legal frameworks struggle to adapt to the complexities of AIaaS and modern data processing relationships. Legal reform and the development of more nuanced legal concepts may be necessary to effectively address the challenges posed by AIaaS providers, controllership, and liability in the evolving landscape of data processing.
Considerations for Legal Reform
While addressing the immediate challenges with data protection law and liability protection for providers is essential, policymakers and regulators must also consider broader issues arising from AIaaS. The emergence of AIaaS as a core component of the digital infrastructure introduces new complexities and concerns that require attention beyond data protection and intermediary liability. In this chapter, we explore these wider implications of AIaaS, recognizing that it goes beyond being a neutral tool and can have significant societal and regulatory effects.
The Normative Nature of AI Systems
AI technologies are not neutral tools but are inherently contextual and contingent. They reflect and encode the values, priorities, and assumptions of their creators, both in terms of design and functionality. AI systems establish boundaries and norms, shaping behavior and potentially influencing societal power dynamics. Recognizing the normative nature of AI is crucial for understanding its impact on society and for effective regulation.
Private Ordering and Regulatory Effects
AIaaS providers wield significant power by enabling, facilitating, and underpinning functionality in customer applications. They have the ability to engage in private ordering, effectively regulating the behavior of individuals and organizations using their services. As AIaaS becomes a critical part of societal infrastructure, the lack of independent, public, and accountable regulation and oversight becomes a pressing concern.
Amplification of Ethical Problems
AIaaS has the potential to amplify existing ethical concerns associated with AI due to the scale at which it operates. As AI services are provided by a few dominant providers and widely adopted, ethical issues such as bias, discrimination, and privacy violations can affect a larger and more diverse population. Policymakers need to consider how to address these amplified ethical challenges.
Training Data for AIaaS Models
The data used to train AIaaS models can raise ethical and privacy issues. AI models rely on vast datasets, and the source and quality of this data are critical. Policymakers should explore regulations and standards for data collection, ensuring fairness, accuracy, and privacy protection, especially when AIaaS models are used in sensitive applications.
AI-Augmented Surveillance

AIaaS can facilitate the growth of AI-augmented surveillance, raising concerns about privacy and civil liberties. Policymakers should develop regulations to ensure that AI surveillance is conducted within legal and ethical boundaries, with a focus on transparency, accountability, and safeguards against abuse.
Potential for Misuse and Abuse
AI services provided by AIaaS can be misused or abused for various purposes, including cyberattacks, misinformation campaigns, and deepfakes. Policymakers must consider mechanisms to mitigate these risks and hold both providers and users accountable for any illicit activities.
Transnational Governance Challenges
AI services operate across borders, making governance and regulation complex. Policymakers need to collaborate at the international level to develop effective frameworks that address the challenges posed by AIaaS while respecting the principles of sovereignty and jurisdiction.
Opaque Processing Chains
AIaaS providers often operate with limited transparency regarding their processing chains and socio-technical processes. While transparency is a valuable goal, it alone cannot address the legal and societal problems highlighted. Policymakers should focus on interventions that directly tackle the identified issues.
In conclusion, the rise of AIaaS introduces profound societal, ethical, and regulatory challenges. Policymakers and regulators must take a holistic approach to address these challenges, acknowledging the normative nature of AI, the power dynamics involved, and the need for effective governance and regulation in an increasingly interconnected and AI-driven world.
Ethical Concerns and AIaaS
Ethical concerns surrounding AI have garnered significant attention in recent years. However, much of the discourse has implicitly assumed that AI development would primarily occur in-house within companies and organizations. What has been less discussed is the transformative impact of AI as a service (AIaaS). This chapter delves into the ethical considerations that arise from the widespread adoption of AIaaS and how it amplifies existing ethical problems associated with AI.
Amplification of Bias and Discrimination
One of the most pervasive ethical concerns in AI is the potential for biases against different population groups. Bias can creep into AI systems in various ways, such as through biased training data or the prejudices of system designers. AIaaS providers, due to their scale and wide-reaching customer base, have the potential to magnify these problems.
AIaaS providers offer their services to numerous customers engaged in diverse pursuits. If biases exist in the provider’s AI systems, they can propagate across a wide array of applications and domains, potentially resulting in discriminatory outcomes. For instance, if a provider’s system exhibits biases against recognizing specific groups of individuals, this bias could affect a multitude of customer applications, leading to possible legal repercussions.
The scale and portability of AIaaS models pose particular challenges in addressing bias. Problems may arise in specific application contexts, which providers, offering generic models without knowledge of individual deployments, are unlikely to anticipate. This “portability trap” means that models used in one context can inadvertently cause harm when deployed in another.
Moreover, addressing biases in AI models involves navigating various fairness and bias mitigation techniques, some of which may conflict with one another. Providers select and implement these measures based on their priorities and values, which can lead to different manifestations of bias in various application contexts.
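To make the conflict between fairness measures concrete: fairness metrics are simple computations over decision data, and different metrics can pull in different directions. The sketch below (illustrative Python, with hypothetical decision records) computes the demographic parity difference — the gap in selection rates between two groups — one of several metrics a provider might choose to optimize, potentially at the expense of others such as equalized odds.

```python
def selection_rate(outcomes, group):
    """Fraction of positive decisions for members of `group`."""
    g = [o for o in outcomes if o["group"] == group]
    return sum(o["selected"] for o in g) / len(g)

# Hypothetical decision records: group membership and a binary decision.
decisions = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

# Demographic parity difference: gap in selection rates between groups.
gap = selection_rate(decisions, "A") - selection_rate(decisions, "B")
print(round(gap, 2))  # → 0.33
```

A provider that drives this gap to zero may simultaneously worsen a different metric (e.g. error-rate balance between groups), which is precisely why the choice of mitigation technique embeds the provider's own priorities.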
While ethical principles for AI have been proposed to tackle these issues, they are voluntary in nature and rely on market forces for enforcement. Commercial interests alone may not ensure that AIaaS providers take sufficient steps to mitigate the risk of biases across their customer base.
Role of Legal and Regulatory Intervention
While ethical principles and non-legal frameworks can complement legal and regulatory standards for AI, they cannot substitute for them. The primary limitation of ethical principles is their voluntary nature, leaving enforcement to market dynamics. While it may be in providers’ best interests to address ethical concerns to avoid negative publicity, this is not always sufficient to ensure adequate mitigation of risks.
Crucially, allowing private companies to make decisions on critical public issues, such as the use of facial recognition technology, raises fundamental questions. The absence of enforceable legal and regulatory frameworks in AIaaS essentially empowers providers to act as their own regulators, often prioritizing commercial considerations over other societal values.
Potential for Legal and Regulatory Intervention
Given the scale and infrastructure-level role of AIaaS in future technology landscapes, it offers potential points for legal and regulatory intervention. Regulating AI at this infrastructural level could be an effective means of addressing the ethical challenges associated with its widespread adoption. This would enable policymakers to ensure that AI is developed and deployed in ways that align with societal values and priorities, rather than leaving these decisions solely in the hands of private companies.
In conclusion, AIaaS has the potential to amplify ethical concerns related to bias, discrimination, and fairness. While ethical principles have a role to play, they cannot replace the need for legal and regulatory frameworks that address the societal impact of AIaaS. Policymakers must recognize the transformative nature of AIaaS and enact regulations that promote ethical AI development and deployment on a broad scale.
Model Training Data
Two significant categories of policy problems emerge concerning the data used to train AI models within AIaaS: (1) issues arising from training data sourced from customers and (2) issues related to training data acquired from external sources. Each of these categories raises unique considerations.
Training Data from Customers
As discussed earlier, AIaaS providers engage in supplementary processing, which may involve using customer data to enhance their AI models. Often, this customer data includes information collected from third parties, such as end-users of applications or subjects of surveillance, which customers input into the AIaaS platform. Utilizing this data for training AI models presents several privacy concerns.
First, there are privacy concerns associated with providers using third-party data to train their models without the knowledge or consent of those third parties. AIaaS often operates discreetly in the background of applications, leaving end-users unaware of the data flows and processing activities supporting the application’s functionality. In cases where third-party data, representing various individuals, is used without their knowledge, it constitutes a significant privacy violation. Additionally, the use of third-party data in training models may involve the processing of special category data without obtaining explicit consent, potentially violating data protection laws.
Second, there is a potential risk related to “model inversion attacks,” which can allow for the analysis of inputs and outputs of an AI system to extract information about the model’s training data. In some instances, this extracted information may contain personal data, potentially revealing details about individuals. This risk is exacerbated when third-party data contributed by multiple customers, which may represent numerous data subjects, is present in the training data.
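As a rough illustration of this attack surface, a confidence-based membership inference probe — a close relative of model inversion — exploits the tendency of overfitted models to respond more confidently to records they were trained on. The snippet below is a toy Python sketch: the "model" is a stand-in function, not any real AIaaS API, and all records are invented.

```python
# Toy sketch of a confidence-based membership inference probe. The "model"
# is a stand-in: like many overfitted classifiers, it is markedly more
# confident on records that appeared in its training data.

TRAINING_SET = {"alice@example.com", "bob@example.com"}  # hypothetical

def model_confidence(record: str) -> float:
    """Stand-in model: overconfident on its own training members."""
    return 0.99 if record in TRAINING_SET else 0.55

def likely_training_member(record: str, threshold: float = 0.9) -> bool:
    """Flag records the model is suspiciously confident about."""
    return model_confidence(record) >= threshold

candidates = ["alice@example.com", "carol@example.com"]
flags = {c: likely_training_member(c) for c in candidates}
print(flags)  # alice is flagged as a likely training member; carol is not
```

An attacker who can only query a deployed AIaaS model may thus learn whether a specific individual's data was in the training set — personal information the provider never intended to disclose.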
Third, access to customers’ data and insights into the real-world deployment of AI models provides providers with a competitive advantage in developing more sophisticated systems. AIaaS providers can leverage the large volume of customer data and varied deployments to refine their models and enhance their functionality. This advantage may lead to the development of more accurate and adaptable systems at a lower net cost. Furthermore, the concentration of AI services around a few dominant companies with access to extensive customer data can contribute to their technical superiority, potentially expanding their influence into other sectors and reinforcing their platform power.
Given the questionable legality of processing special category data, the broader privacy concerns surrounding the use of third-party data, and the potential contribution to platform power and monopolization, there is a compelling case for considering prohibitions on AIaaS providers using customer data for training or improving models and systems. While such prohibitions may prompt providers to seek training data from other sources, other regulatory mechanisms, such as GDPR’s restrictions on data repurposing, could also address this issue when properly enforced.
Training Data from External Sources
In addition to customer data, AIaaS providers heavily depend on extensive data and technical supply chains that provide the necessary training data, often aggregated, cleaned, and labeled. While some sources of this data are uncontroversial, others raise significant concerns.
Unproblematic Sources: Some sources of training data used by AIaaS providers are unproblematic. For example, Google’s reCAPTCHA leverages user verification of everyday objects in images, essentially allowing users to label training data for Google’s image recognition algorithms. While this benefits Google, it is a voluntary action on the part of users.
Controversial Practices: However, some practices are considerably more contentious. Certain AI service providers have scraped websites to obtain training data, which raises ethical and legal questions. Notably, Clearview AI, a controversial provider, scraped user images from major social media platforms for their training datasets. Clearview AI’s facial recognition service has been adopted by law enforcement agencies and commercial entities globally.
Cross-Border Data Flows: AIaaS providers often engage in cross-border data flows, which enable concerning practices in their supply chains. First, providers may source training data from countries with minimal or absent data protection or privacy regulations and use it to develop systems deployed in jurisdictions with robust data protection laws. Second, providers frequently contract low-paid, precarious workers in countries with lower labor standards to label and clean the data used in developing AI services. This practice is analogous to outsourcing in physical manufacturing supply chains to circumvent workers’ rights protections, environmental standards, and minimum wage laws.
The cross-border nature of these technical supply chains allows AIaaS providers to outsource crucial work underpinning AI services to other jurisdictions, potentially escaping privacy, data protection, employment, and other laws. Any future regulatory initiatives pertaining to AIaaS should consider whether AI systems offered as a service within the EU should be trained on data obtained, labeled, cleaned, and processed in compliance with European data protection, employment, and other relevant laws.
Surveillance as a Service
One critical policy issue arising from AIaaS is the potential for AI-augmented surveillance. Surveillance, whether conducted by public or private entities, has gradually extended into various aspects of modern life through internet-enabled technologies. AIaaS, in essence, enables what can be described as “Surveillance as a Service.” This innovation is particularly significant for those who lack the technical expertise or resources to develop surveillance systems independently.
The increasing presence of cameras, microphones, and various sensors in physical spaces has led to the collection of data from and about individuals, often without their knowledge or awareness of how this data will be utilized. AI services can assist in monitoring and analyzing people’s behavior in these spaces. For example, retailers can use AI to track customers in stores and analyze their behavior patterns. Facial recognition and other biometric services can enable surveillance in public spaces, potentially identifying and monitoring individuals who would otherwise remain anonymous. Speech and voice recognition services can similarly be employed to monitor private conversations and identify individuals by their voice.
The rapid deployment of AI-augmented surveillance systems, made cost-effective and scalable by AIaaS, can fundamentally transform both physical and virtual spaces. It can also alter the dynamics of power, control, and privacy between those conducting surveillance and those being observed.
AI capabilities empower those conducting surveillance to shift from passive observation to actively analyzing and influencing people’s behavior or subjecting them to intervention or detention. Moreover, the inherent biases and errors in machine learning systems pose a significant risk of exacerbating societal divisions and hierarchies based on gender, race, and ethnicity.
As a result, the introduction of AI into video surveillance and digital information-gathering infrastructure may disrupt power balances in favor of those controlling these formerly “dumb” systems, necessitating a reassessment of how to maintain an appropriate balance between societal interests, fundamental rights, and the expansion of digital infrastructure.
Misuse and Abuse
The scale at which customers can harness AIaaS presents significant potential for misuse, abuse, or undesirable uses of AI services, which can have serious consequences. To address these concerns, AIaaS providers typically outline in their service agreements that customers cannot engage in or promote illegal or unlawful activities. For instance, most providers explicitly prohibit activities like criminal behavior, fraud, intellectual property rights infringement, and defamation.
Providers also impose limitations on the purposes for which their services can be used. For instance, Amazon prohibits the use of its AWS services for harmful content, offensive material, security violations, and network abuse. Microsoft prohibits its services from being used to violate the rights of others, distribute malware, or engage in activities that could result in harm. Google similarly restricts the use of its services to prevent rights infringement, the distribution of malicious software, and spam.
While the absence of liability protection for AIaaS providers should incentivize them to proactively identify and prevent illegal use of their services, the primary motivators for providers to identify legal yet undesirable uses are typically commercial pressures and the risk of reputational damage. These factors have already led several providers to prohibit the use of their facial recognition services by law enforcement.
Identifying illegal or prohibited use of AIaaS can be a challenging task. However, several potential methods for identifying misuse or abuse exist. One effective policy change would involve ending the practice of offering AI services in a turn-key manner. Instead, providers could require consultation and vetting of customers and their applications, ensuring they adhere to well-defined and context-specific terms of service provisions. Providers could also implement mechanisms such as rate limiting, which restricts the frequency and volume of customers’ API requests. For instance, Amazon and Microsoft already employ similar methods for specific services, particularly those that process faces.
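Rate limiting of the kind mentioned above is commonly implemented with a token-bucket scheme: each customer accrues tokens at a fixed rate up to a burst capacity, and each API request spends one token. The following is an illustrative sketch of the general technique, not any specific provider's implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens/sec, burst `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)   # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Refill tokens for elapsed time, then spend one if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# 5 requests/second sustained, bursts of up to 10 allowed.
bucket = TokenBucket(rate=5.0, capacity=10)
results = [bucket.allow() for _ in range(12)]  # 12 rapid-fire requests
```

In a burst of twelve back-to-back requests, the first ten are admitted and the remainder rejected until tokens refill, which caps the volume a single customer can push through a face-processing endpoint, for example.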
More advanced monitoring by providers to identify illegal activity or violations of terms of service is theoretically possible, though not always practical. Various methods of assessing AIaaS usage could nonetheless be implemented, such as monitoring metadata for suspicious usage patterns or examining customers’ inputs and outputs, helping providers identify cases that warrant further investigation. Any such monitoring must be balanced against privacy concerns: because customer data is often third-party data, systematic monitoring by providers may inadvertently reveal end-users’ behaviors and activities. Monitoring should therefore be justified by overriding public policy interests and be both necessary and proportionate.
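One crude, comparatively privacy-preserving form of metadata monitoring is a volume-based outlier check: flag customers whose request counts far exceed the norm for further human review, without inspecting the content of their inputs or outputs. A minimal sketch, with hypothetical customer counts and an arbitrary threshold:

```python
from statistics import median

def flag_suspicious(customer_counts, factor=10.0):
    """Flag customers whose request volume far exceeds the median volume.

    A deliberately crude outlier test on metadata only: no inputs or
    outputs are examined, just per-customer request counts.
    """
    med = median(customer_counts.values())
    return [c for c, n in customer_counts.items() if n > factor * med]

# Hypothetical daily request counts per customer.
daily = {"cust_a": 1_000, "cust_b": 1_200, "cust_c": 950,
         "cust_d": 1_100, "cust_e": 48_000}
print(flag_suspicious(daily))  # → ['cust_e']
```

A flag here is only a trigger for investigation, not proof of misuse — which matches the proportionality constraint discussed above, since the check never touches the (often third-party) content of customers' data.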
References
GDPR – https://gdpr.eu/
Amazon, ‘AWS GDPR Data Processing Addendum’, section 1.1, accessed 13 November 2020; Microsoft, ‘Online Services Data Protection Addendum’ (July 2020), 7, https://www.microsoft.com/en-us/licensing/product-licensing/products accessed 13 November 2020 (see also Microsoft Azure, ‘Azure Cognitive Services’ accessed 13 November 2020); Google Cloud, ‘Data Processing and Security Terms (Customers)’, section 5.1, accessed 13 November 2020.
Microsoft Azure, ‘Azure Cognitive Services’ 29 March 2021.
Article 29 Data Protection Working Party, ‘Opinion 1/2010 on the concepts of “controller” and “processor”’ (2010) 00264/10/EN WP169, 10-11.
Article 29 WP (n 72), 14.
Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v. Wirtschaftsakademie Schleswig-Holstein GmbH 2018.
Case C-25/17 Tietosuojavaltuutettu v. Jehovan Todistajat (‘Jehovah’s Witnesses’) 2018.
Case C-40/17 FashionID GmbH & Co. KG v Verbraucherzentrale NRW e.V. 2019.
Microsoft’s service agreement says that they will ‘comply with the obligations of an independent data controller under GDPR’ [emphasis added] in relation to some ancillary processing, though notably – unlike in some other areas of their service agreement – they do not claim to actually be a controller for that processing (Microsoft, ‘Online Services Data Protection Addendum’ (July 2020) p.7 accessed 13 November 2020).
European Data Protection Board (n 100), 18.
European Data Protection Supervisor, ‘EDPS Guidelines on the concepts of controller, processor and joint controllership under Regulation (EU) 2018/1725’ (2019), 24; see also Article 29 WP (n 70), 17-22.
René Mahieu, Joris van Hoboken, and Hadi Asghari, ‘Responsibility for Data Protection in a Networked World: On the Question of the Controller, “Effective and Complete Protection” and its Application to Data Access Rights in Europe’ (2019) 10 Journal of Intellectual Property, Information Technology and E-Commerce Law 1.
European Data Protection Supervisor, ‘Assessing the necessity of measures that limit the fundamental right to the protection of personal data: A Toolkit’ (2017) accessed 13 November 2020.
Data Protection Act 2018 (‘DPA 2018’), sch 1, pt 2.
See table at Information Commissioner’s Office, ‘Guide to the General Data Protection Regulation: What are the substantial public interest conditions?’ accessed 13 November 2020
Information Commissioner’s Office, ‘Guide to the General Data Protection Regulation: What are the substantial public interest conditions?’ accessed 13 November 2020; Some private companies that wish to deploy live facial recognition systems have argued that they can rely on the ‘substantial public interest’ ground for preventing and detecting crime. We find this argument unconvincing. While preventing or detecting crime may be in the public interest, the deployment of facial recognition does not, in our view, meet the necessity test (unless we are to believe that there has until now been an epidemic of crime that only augmenting the already extensive use of CCTV with facial recognition systems deployed by a private company can possibly prevent). Nor do we agree that deploying facial recognition systems in a shopping centre, for example, is in the substantial interest of the public, rather than principally in the interests of the private owner of the shopping centre and their commercial tenants.
Google Cloud, ‘Terms of Service’ section 3.2 https://cloud.google.com/terms accessed 13 November 2020.
The UK’s Data Protection Act 2018, for instance, includes ‘equality of opportunity or treatment’ as a condition for the substantial public interest exemption (DPA 2018, sch 1, pt 2, para 8).
Directive 98/34/EC of the European Parliament and of the Council of 22 June 1998 laying down a procedure for the provision of information in the field of technical standards and regulations OJ L 204/37 (‘Technical Standards and Regulations Directive’) art 1 (as amended by Directive 98/48/EC of the European Parliament and of the Council of 20 July 1998 amending Directive 98/34/EC laying down a procedure for the provision of information in the field of technical standards and regulations OJ L 217/18); see also E-Commerce Directive recital 18.
E-Commerce Directive art 2(d).
E-Commerce Directive art 2(b): “any natural or legal person providing an information society service”.
E-Commerce Directive art 12.
Jasper P Sluijs, Pierre Larouche, and Wolf Sauter, ‘Cloud Computing in the EU Policy Sphere’ (2012) 3 Journal of Intellectual Property, Information Technology and e-Commerce Law 1.
Case C-324/09 L’Oréal SA and Others v eBay International AG and Others (‘L’Oreal v eBay’) para 113; Cases C-236/08 Google France SARL and Google Inc. v Louis Vuitton, C-237/08 Google France SARL v Viaticum SA and Luteciel SARL, and C-238/08 Google France SARL v Centre national de recherche en relations humaines (CNRRH) SARL and Others (‘Google France and Google’) para 114.
E-Commerce Directive s 4; Google France and Google para 112.
E-Commerce Directive recital 42.
E-Commerce Directive recital 42; Google France and Google para 114; L’Oreal v eBay para 113.
Omer Tene, ‘Privacy Law’s Midlife Crisis: A Critical Assessment of the Second Wave of Global Privacy Laws’ (2013) 74 Ohio State Law Journal 1217.
Seda Gürses and Joris van Hoboken, ‘Privacy after the Agile Turn’ in Jules Polonetsky, Omer Tene, and Evan Selinger (eds) Cambridge Handbook of Consumer Privacy (Cambridge University Press 2017); Mahieu et al (n 119); Lilian Edwards, Michèle Finck, Michael Veale, Nicolo Zingales, ‘Data subjects as data controllers: a Fashion(able) concept?’  Internet Policy Review; Christopher Millard, Christopher Kuner, Fred H Cate, Orla Lynskey, Nora Ni Loideain, and Dan Jerker B Svantesson ‘At this rate, everyone will be a [joint] controller of personal data!’ (2019) 9 International Data Privacy Law 4.
See, for example, Jatinder Singh, Christopher Millard, Chris Reed, Jennifer Cobbe, and Jon Crowcroft, ‘Accountability in the IoT: Systems, Law, and Ways Forward’ (2018) 51 IEEE Computer 7.
Article 29 WP (n 70), 14-15.
European Data Protection Supervisor (n 100), 9.
European Data Protection Board, ‘Guidelines 07/2020 on the concepts of controller and processor in the GDPR’ (2020), 13-15
The UK’s Information Commissioner’s Office, for instance, suggests that where providers decide questions relating to model design and development they may be considered to be data controllers (Information Commissioner’s Office, ‘Guide to Data Protection: What are the accountability and governance implications of AI?’ accessed 13 November 2020.
BBC News, ‘IBM abandons ‘biased’ facial recognition tech’ (9 June 2020) https://www.bbc.co.uk/news/technology-52978191 accessed 13 November 2020; Abner Li, ‘Google Cloud won’t sell facial recognition tech yet, ‘working through’ policy questions’ (13 December 2018) https://9to5google.com/2018/12/13/google-not-selling-face-recognition/ accessed 13 November 2020.
Seyyed Ahmad Javadi, Richard Cloete, Jennifer Cobbe, Michelle Seng Ah Lee and Jatinder Singh, ‘Monitoring Misuse for Accountable “Artificial Intelligence as a Service”’ (2020) Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20), ACM, New York, NY, USA https://dl.acm.org/doi/10.1145/3375627.3375873 accessed 13 November 2020; Seyyed Ahmad Javadi, Chris Norval, Richard Cloete, and Jatinder Singh, ‘Monitoring AI Services for Misuse’ (2021) Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21), ACM, Virtual Event, USA.
Services offered on a consultancy basis typically allow customers much greater latitude in defining their relationship with the provider and the service offered.
Javadi et al (n 186); these would be similar to some of the checks undertaken in, for instance, the banking and financial services industry
Engin Bozdag, ‘Bias in algorithmic filtering and personalisation’ (2013) 15 Ethics and Information Technology 209-227 https://link.springer.com/article/10.1007/s10676-013-9321-6 accessed 13 November 2020; Tarleton Gillespie, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo J Boczkowski, and Kirsten A Foot (eds), Media Technologies: Essays on Communication, Materiality, and Society (MIT Press 2014); Robin Hill, ‘What An Algorithm Is’ (2016) 29 Philosophy and Technology 35 https://link.springer.com/article/10.1007/s13347-014-0184-5 accessed 13 November 2020; David Beer, ‘The Social Power of Algorithms’ (2017) 20 Information, Communication & Society 1 http://eprints.whiterose.ac.uk/104026/1/Algorithms_editorial_final.pdf accessed 13 November 2020; Natascha Just and Michael Latzer, ‘Governance by algorithms: reality construction by algorithmic selection on the Internet’ (2017) 39 Media, Culture & Society 2, 238-258 https://journals.sagepub.com/doi/abs/10.1177/0163443716643157?journalCode=mcsa accessed 13 November 2020.
Rob Kitchin, ‘Thinking critically about and researching algorithms’ (2017) 20 Information, Communication & Society 1.
Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books 1999).
Kitchin (n 198).
Sylvie Delacroix, ‘Beware of ‘Algorithmic Regulation’’ (2019) SSRN https://ssrn.com/abstract=3327191 accessed 13 November 2020.
Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (‘Proposed Artificial Intelligence Act’).
Proposed Artificial Intelligence Act arts 1-2.
Proposed Artificial Intelligence Act art 5.
For an overview of academic work on bias and fairness in ML, see Sahil Verma and Julia Rubin, ‘Fairness Definitions Explained’ FairWare ’18: Proceedings of the International Workshop on Software Fairness; Harini Suresh and John V Guttag, ‘A Framework for Understanding Unintended Consequences of Machine Learning’ arXiv preprint, arXiv:1901.10002 https://arxiv.org/abs/1901.10002 accessed 29 March 2021.
Joy Buolamwini and Timnit Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’  Proceedings of Machine Learning Research, 81.
Andrew D Selbst, Danah Boyd, Sorelle Friedler, Suresh Venkatasubramanian, and Janet Vertesi, ‘Fairness and Abstraction in Sociotechnical Systems’  2019 ACM Conference on Fairness, Accountability, and Transparency (FAT* 19) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3265913 accessed 29 March 2021.
For a review of some proposed sets of AI ethics principles, see Thilo Hagedorff, ‘The Ethics of AI Ethics: An Evaluation of Guidelines’  Minds & Machines 30, 99-120.
Elettra Bietti, ‘From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy’  FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency.
IBM has already ceased offering facial recognition services (BBC News (n 165)). Amazon and Microsoft have chosen not to offer facial recognition to law enforcement in the US (Isobel Asher Hamilton, ‘Outrage over police brutality has finally convinced Amazon, Microsoft, and IBM to rule out selling facial recognition tech to law enforcement. Here’s what’s going on’ Business Insider (13 June 2020) https://www.businessinsider.com/amazon-microsoft-ibm-halt-selling-facial-recognition-to-police-2020-6 accessed 13 November 2020).
Amazon Web Services (n 37).
Michael Veale, Reuben Binns, and Lilian Edwards, ‘Algorithms that remember: model inversion attacks and data protection law’ (2018) 376 Philosophical Transactions of the Royal Society A 2133.
Veale et al (n 210).
James O’Malley, ‘Captcha if you can: how you’ve been training AI for years without realising it’ TechRadar (12 January 2018) accessed 13 November 2020.
Rachel Connolly, ‘Scraping Faces’ London Review of Books (28 January 2020) https://www.lrb.co.uk/blog/2020/january/scraping-faces accessed 13 November 2020.
Kashmir Hill, ‘The Secretive Company That Might End Privacy as We Know It’ The New York Times (18 January 2020) https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html accessed 13 November 2020; Kate Cox, ‘Facebook, YouTube order Clearview to stop scraping them for faces to match’ ArsTechnica (7 February 2020) accessed 13 November 2020.
Ryan Mac, Caroline Haskins, and Logan McDonald, ‘Clearview’s Facial Recognition App Has Been Used By The Justice Department, ICE, Macy’s, Walmart, And The NBA’ Buzzfeed News (27 February 2020) https://www.buzzfeednews.com/article/ryanmac/clearview-ai-fbi-ice-global-law-enforcement accessed 13 November 2020.
Kristina Irion and Josephine Williams, ‘Prospective Policy Study on Artificial Intelligence and EU Trade Policy’ (2019) Amsterdam: Institute for Information Law.
Noopur Raval, ‘Automating Informality: On AI and Labour in the Global South’ (2019) Global Information Society Watch; Irion and Williams (n 217); Madhumita Murgia, ‘AI’s new workforce: the data-labelling industry spreads globally’ Financial Times (24 July 2019) https://www.ft.com/content/56dde36c-aa40-11e9-984c-fac8325aaa04 accessed 13 November 2020.
Through what Foucault described as the disciplinary power of panopticism (Michel Foucault, Discipline and Punish (trans Alan Sheridan, Penguin 1991)).
Bringing this kind of disciplinary power away from panopticism and closer to the kind of control through ‘universal modulation’ as described by Deleuze (Gilles Deleuze ‘Postscript on the Societies of Control’ (1992) 59 October, 3-7).
Michael Veale, ‘A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence’ (2019) UCL Working Paper Series, No. 8/2019 https://ssrn.com/abstract=3475449 accessed 13 November 2020, 5-6.
Amazon Web Services, ‘AWS Acceptable Use Policy’ https://aws.amazon.com/aup/ accessed 13 November 2020; Microsoft, ‘Online Services Terms’ accessed 13 November 2020; Google Cloud, ‘Google Cloud Acceptable Use Policy’ https://cloud.google.com/terms/aup accessed 13 November 2020.
Amazon Web Services (n 222).
Microsoft, ‘Online Services Terms’ (n 222).
Google Cloud (n 222).
Hamilton (n 208).