Artificial Intelligence (AI) has become a ubiquitous buzzword in recent years, with its applications spanning from personal digital assistants like Siri and Alexa to complex machine learning algorithms used in autonomous vehicles and medical diagnosis. AI, in its various forms, has the potential to revolutionize the way we live and work.
The Promise of AI
AI is often touted as the technology that will reshape industries, boost productivity, and drive economic growth. According to the McKinsey Global Institute, AI could contribute an additional $13 trillion to the global economy by 2030. This tremendous potential stems from AI’s ability to perform complex tasks that were once the exclusive domain of human experts.
From medical data analysis aiding doctors in diagnosis to the rapid analysis of vast amounts of video footage for criminal investigations, AI is demonstrating its prowess. Yet despite this promise, adopting AI remains difficult for many organizations, for several reasons:
- Scarcity of AI Experts: There is a shortage of AI professionals with the expertise required to develop and deploy AI solutions effectively.
- Resource Intensiveness: Setting up and maintaining the extensive IT infrastructure needed for AI operations can be prohibitively expensive for many organizations.
- Lack of Know-How: Organizations often lack the knowledge and experience necessary to configure and deploy AI systems optimally.
These challenges have led to a situation where many organizations, particularly smaller ones, are hesitant to embrace AI and fully realize its potential.
AI as a Service
In the realm of technology, the term ‘AI’ often takes on many different meanings and interpretations. Within the context of this discussion, however, we focus specifically on AI in the sense of machine learning (ML): a subset of AI that uncovers intricate patterns within datasets and subsequently constructs and refines models that represent those data.
These models, in turn, can be harnessed to make classifications, predictions, decisions, and other valuable insights when presented with new data. To illustrate this concept further, consider a model designed for the recognition of specific objects in images. This model undergoes training using a dataset containing various images to statistically recognize distinctive characteristics of particular objects. Once trained, the model can be deployed to classify objects within new images.
It’s important to note that the outputs of these models are probabilistic in nature. Because ML derives statistical models from training data, there is always a degree of error or uncertainty associated with both the representation and outputs of these models. The efficacy and functionality of an ML model are determined by a multitude of factors, including:
- the quality and specificity of the training data;
- the methods used for data selection, cleaning, and preprocessing;
- the machine learning techniques, configurations, and parameters employed to build the statistical model; and
- any post-processing techniques applied to correct or adjust model outputs.
This distinguishes machine learning from traditional software engineering, where outcomes are explicitly programmed.
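To make this probabilistic character concrete, the toy classifier below derives per-class statistics (class centroids) from labelled training points, then scores a new input with a softmax over distances. Everything here is illustrative: the data, the function names, and the centroid-plus-softmax approach are a deliberately minimal stand-in for a real ML model, not any provider’s actual method.

```python
import math

def train_centroids(samples):
    """Derive a statistical 'model': the mean feature vector per class."""
    sums, counts = {}, {}
    for features, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict_proba(model, features):
    """Score an input: closer centroids get higher probability (softmax)."""
    scores = {label: -math.dist(features, centroid)
              for label, centroid in model.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / z for label, s in scores.items()}

# Hypothetical 2-D training data for two classes.
training = [([0.0, 0.0], "cat"), ([0.2, 0.1], "cat"),
            ([1.0, 1.0], "dog"), ([0.9, 1.1], "dog")]
model = train_centroids(training)
probs = predict_proba(model, [0.1, 0.0])  # an input near the "cat" cluster
```

Note that the model never returns a certain answer, only a probability distribution over classes, which is exactly the uncertainty the paragraph above describes.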
AIaaS in Practice
AI as a Service, or AIaaS, is a subset of cloud services that encompasses two main components. Firstly, it involves providing technical environments and resources to empower customers in conducting their own machine learning endeavors, sometimes referred to as ‘Machine Learning as a Service.’ Secondly, it entails offering access to pre-built AI models that customers can seamlessly integrate into their applications. The spectrum of AIaaS offerings can vary, with some services being hybrids that incorporate elements of both approaches. However, this paper predominantly focuses on the second type, which is the most prominent form of AIaaS. Thus, we use “AIaaS” to denote commercial offerings that grant access to generalized pre-built ML models as a service.
Figure. A simplified AIaaS scenario. An image is sent via the customer’s application to the provider’s image recognition service. The service analyses the image (applying the model) and returns the detected objects.
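In practice, the request and response in this scenario are typically JSON payloads exchanged over an HTTP API. The shapes below are purely illustrative; the field names are invented, and each provider defines its own schema.

```python
import base64
import json

# Illustrative request: the customer's app sends an encoded image
# to the provider's image recognition endpoint.
request_body = json.dumps({
    "image": base64.b64encode(b"<raw image bytes>").decode("ascii"),
    "features": ["OBJECT_DETECTION"],
})

# Illustrative response: detected objects with confidence scores,
# reflecting the probabilistic nature of the underlying model.
response_body = json.dumps({
    "objects": [
        {"label": "bicycle", "confidence": 0.97},
        {"label": "person", "confidence": 0.88},
    ]
})

detected = json.loads(response_body)["objects"]
```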
The leading AIaaS providers are typically larger technology companies with the financial and technical capabilities required to develop and offer intricate machine learning systems. These providers include Amazon (through Amazon Web Services or ‘AWS’), Microsoft (through Microsoft Azure), Google (through Google Cloud), IBM (Watson), and some smaller providers like BigML. In this discussion, we will focus on the three largest and most influential providers—Amazon, Microsoft, and Google—as they dominate the market and offer services representative of those of other providers.
AI services exist within a broader ecosystem that encompasses various components, such as data brokers who supply data to both customers and providers, as well as training data labeling services like Mechanical Turk. These AI services typically provide generic capabilities that can be applied across a wide range of application contexts. Broadly speaking, commercial AIaaS providers offer four primary categories of service, although others also exist:
- Language: This category includes services like text sentiment analysis, translation, and knowledge base creation.
- Speech: It encompasses services such as speech transcription, speech synthesis, and voice recognition.
- Vision: This category covers both still images and videos, offering services like image analysis and classification, object recognition, and facial detection, analysis, or recognition.
- Analytics: It includes services related to web usage analysis, behavioral analysis, recommendations and personalization, content moderation, and anomaly detection.
These AI services are typically tightly integrated into a provider’s suite of cloud services, offering customers a wide array of tools to support their applications. For instance, if a customer utilizes AWS for hosting their application, they can seamlessly leverage Amazon’s AI services as part of their hosting package, thereby extending the functionality of their application. While customers can also utilize services from other providers to support their applications, this integration of AIaaS with other cloud services provides the scalability and ease of deployment that might otherwise be unattainable.
Similar to many other cloud services, AIaaS is readily available on demand. Providers offer standard form contracts, making it relatively easy and cost-effective for customers to configure and utilize the service. Some providers even offer AI services on a consultancy basis, working closely with customers to tailor services to their specific needs. This can involve customization of pre-built models to a significant extent or even complete customization for certain high-value customers.
Motivations for Considering AIaaS
AI as a Service (AIaaS) deserves a closer look for several compelling reasons.
Firstly, AI services play a distinct and crucial role in enabling specific functionalities within customer-defined applications. Unlike traditional cloud services that mainly support operational aspects like availability, storage, connectivity, scalability, and security, AI services directly underpin the core functionalities of customer applications.
In other words, AI services provide classifications, analyses, detections, predictions, and other capabilities upon which the customer’s application relies. The performance of the AI service’s model is intimately tied to the provider’s engineering processes, and thus, it significantly influences the functionalities embedded in the customer’s application.
Secondly, AI systems, while powerful, are not without their challenges. They can exhibit errors, biases, inequalities, and other issues. Through AIaaS, these problems have the potential to be reproduced at scale. Furthermore, the widespread availability of state-of-the-art AI capabilities through AIaaS, often with limited provider oversight, raises concerns about enabling undesirable, problematic, or even illegal applications. This raises important questions regarding the roles, responsibilities, and potential liabilities of both AIaaS providers and their customers in ensuring ethical and lawful use of AI technologies.
Thirdly, AIaaS is poised to grow in prominence. In-house machine learning endeavors can be prohibitively expensive and resource-intensive, requiring access to vast amounts of data, specialized expertise, and substantial computational power. By providing developers with the ability to ‘plug in’ pre-built machine learning capabilities into their applications, AIaaS significantly increases the likelihood that machine learning will become the backbone of a broader range of applications. In the future, many organizations seeking to harness AI may rely on AI services to integrate the desired functionality seamlessly.
Lastly, due in part to the substantial data, expertise, and computational resources required to develop sophisticated AI systems, it is likely that dominant players in the digital economy will be the practical providers of AI services. As mentioned previously, Amazon, Microsoft, and Google have already emerged as leaders in the AIaaS sector, and they also hold dominant positions in other online service sectors. This consolidation around a few major providers has significant implications: not only do these providers gain a potentially lucrative and monopolistic revenue source, but they also position themselves at the core of any societal transformation brought about by the widespread availability of AI technologies. The leading AIaaS providers are three of the “Big Tech” companies and represent three of the four most valuable publicly traded companies globally by market capitalization. In recent years, these companies have formed an oligopoly that wields significant power through their financial resources and dominance in online services, further solidified by their control of AIaaS infrastructure at both virtual and physical levels.
The AIaaS Processing Chain
Figure. Representation of the stages of the AIaaS processing chain. First, the customer collects input data from one or more sources (such as third parties or data brokers). This stage may involve some processing – data collation, pre-processing, analysis, etc. – by the customer. The customer then transfers input data to the provider (request), who analyses that data. The provider returns outputs of that analysis to the customer (response). The return stage may be followed by further processing by the customer or the transmission of data onwards to others, but activity before the collection stage and after the return stage is not part of the AIaaS chain itself.
Now, let’s delve into the intricate process that forms the backbone of AI as a Service, which we’ll refer to as the AIaaS processing chain. Much like the broader cloud ecosystem (as depicted in Figure 1), the AIaaS processing chain typically involves at least two key entities: the providers of AI services and the application vendors who leverage these services, often referred to as customers or tenants in common cloud terminology. In some cases, third parties may also play a role in this chain. These third parties can be either active users of the services or passive subjects of data processing.
The AIaaS processing chain is a loop in which data flows from customers to providers and then back to customers, potentially involving third parties at various stages. The primary stages of this processing chain include:
- Collection of Input Data and Customer Processing: Customers initiate the process by collecting input data. This data collection can take various forms, such as data input by third parties into the customer’s application, the collection of usage or behavioral data, surveillance of physical spaces using cameras, microphones, or sensors, acquisition of data from data brokers, and more. Importantly, the data collection phase does not necessarily involve storage, aggregation, or temporal delay, and it may not even touch equipment owned or managed by the customer.
- Transfer of Input Data to Providers: Customers transfer the collected input data to providers through a networked API request. This data transfer may occur directly from a customer application’s third-party end-users to the provider, initiated and directed by the customer, without passing through any customer-managed servers. Alternatively, the data may come directly from the customer themselves.
- Analysis by Providers: Providers analyze the transferred data using machine learning systems, applying models trained using datasets curated by the AIaaS provider. These models are usually updated routinely and iteratively by the provider to enhance accuracy and address identified issues. The focus here is on models generally available to customers, rather than models tailored to specific customer requirements.
- Results Returned to Customers: Providers return the results of their analyses to customers over a network as API responses. These results are typically sent to the entity that initiated the request. In some cases, output data may be transferred directly from the provider to third-party end-users of the customer’s application for display or further processing, bypassing the customer’s managed equipment.
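The four stages above can be sketched end-to-end in a few lines. This is a structural sketch only: the provider-side "model" is a stand-in stub (a keyword check in place of a real ML system), and all function and field names are hypothetical.

```python
def collect_input(source):
    """Stage 1: the customer collects input data (here, from an in-memory source)."""
    return source["payload"]

def provider_analyse(input_data):
    """Stage 3: the provider applies its pre-built model.
    Stubbed here as a trivial keyword check standing in for real sentiment analysis."""
    return {"sentiment": "positive" if "great" in input_data else "neutral"}

def aiaas_chain(source):
    data = collect_input(source)                   # 1. collection by the customer
    request = {"body": data}                       # 2. transfer to provider (API request)
    response = provider_analyse(request["body"])   # 3. analysis by the provider
    return response                                # 4. results returned (API response)

result = aiaas_chain({"payload": "this product is great"})
```

After the return stage, the customer would typically act on `result` (display it, store it, or trigger further processing), which is the post-chain activity described below.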
Following the return stage, customers may perform their own analyses or other processing of the service’s outputs. Output data, or the results of subsequent processing, may be used to determine application functionality, be displayed on websites or in apps, trigger additional AIaaS calls, be stored in databases, inform device functionality, give customers insights into application usage, or be transferred to other entities, among other possibilities.
Additionally, providers may conduct further processing of input data for their own purposes, beyond the core AIaaS processing chain. This additional processing by providers can be broadly categorized into two types:
- Ancillary Processing: This processing is necessary for and directly connected to providing a commercial AI service. It includes activities like billing, security, technical maintenance, and compliance with legal and regulatory requirements.
- Supplementary Processing: This type of processing serves purposes unrelated to the commercial supply of the AI service itself. Examples include improving models or systems, conducting market research, or enforcing aspects of the provider’s terms of service that are not linked to ancillary processing. Supplementary processing often involves using customer data to enhance models, thus improving the functionality of both the provider’s models across their customer base and individual customers’ applications.
Notably, supplementary processing is at the discretion of the provider, as it is not a requirement for delivering the service itself. While cloud providers use information gathered about customers’ usage of their other services (i.e., metadata) to inform product improvement and development, AIaaS is unique in that it often involves using the actual customer input data for supplementary processing to enhance core AI models. This drives improvements in the provider’s models and, subsequently, the functionality of customers’ applications.
Precise details about how AIaaS providers use customer data can vary, but available information suggests that major AIaaS providers engage in supplementary processing using customer data to some extent. For instance, Google asks customers to permit the use of their speech recognition inputs for model refinement, while Amazon uses customer data from their Rekognition vision service for model improvement by default.
Microsoft explicitly states that they do not use customer data from certain Azure AI services for supplementary processing, but they may use customer data from other services. However, the extent to which Microsoft engages in supplementary processing with customer data for various AI services remains unclear. Providers may even offer financial incentives to customers who permit access to their data. This type of supplementary processing, driven by customer data, is a distinctive feature of AIaaS, setting it apart from most other cloud services.
This multifaceted AIaaS processing chain represents the complex journey of data from its collection by customers to its analysis by providers, with many potential implications for data privacy, security, ethics, and the functioning of AI-powered applications and services.
Artificial Intelligence as a Service (AIaaS): Bridging the Gap
To address these challenges and make AI more accessible, cloud providers like Amazon, Google, IBM, Microsoft, Salesforce, and SAP have introduced AI services that provide machine learning, deep learning, analytics, and inference capabilities on a subscription basis. These services bring AI capabilities directly from the cloud to organizations, bridging the gap between the potential of AI and its practical implementation.
Start-ups and small to medium-sized enterprises (SMEs) have also entered the AIaaS arena, offering tailored cloud-based AI services designed to meet the specific needs of various industries. For instance, Incomaker provides AI-based sales and marketing automation tools. Collectively, these services are known as Artificial Intelligence as a Service (AIaaS), combining the power of AI with the convenience of cloud computing.
The Essence of AIaaS
AIaaS aims to democratize AI by making it accessible and affordable to organizations of all sizes, regardless of their technological sophistication or budgetary constraints. The core principle of AIaaS is to guide users through the AI development and deployment process without necessitating an in-depth understanding of complex algorithms and technologies.
With AIaaS, users can focus on tasks such as training and configuring their AI models, aligning the technology with their specific requirements, and pursuing their core competencies without being encumbered by the intricacies of installation, maintenance, and related management concerns. This shift in focus allows organizations to harness the power of AI without the need for a dedicated team of AI experts.
Practical Applications of AIaaS
To illustrate the practical application of AIaaS, consider the development of an industrial quality control system based on camera images of manufactured products. In this scenario, an AIaaS platform is utilized to streamline the process. Here’s how it works:
- Image Capture: A camera is deployed in a manufacturing facility to capture images of products as they move along the production line.
- Image Analysis: These images are then transmitted to an AIaaS provider that specializes in computer vision capabilities.
- AI-Powered Assessment: The AIaaS platform processes the images using advanced computer vision algorithms, determining whether the product is in good condition or exhibits defects.
- Real-time Feedback: The assessment results are relayed back to the manufacturing facility in real-time, enabling immediate corrective action if defects are detected.
In this scenario, the developers of the quality control system do not need to grapple with the technical intricacies of computer vision algorithms. Instead, they leverage the expertise of the AIaaS provider, which handles the hardware and configuration decisions, allowing the developers to concentrate on refining their quality control processes.
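As a rough sketch of this loop, the code below simulates the capture-assess-feedback cycle. The provider's computer vision model is replaced by a trivial dark-pixel heuristic, frames are simulated as lists of grayscale values, and the thresholds and names are invented for illustration.

```python
def assess_product(image_pixels, dark_threshold=0.5, defect_ratio=0.2):
    """Stand-in for the provider's vision service: flag a defect when too
    many pixels are dark (e.g., a scratch or stain). Thresholds are illustrative."""
    dark = sum(1 for p in image_pixels if p < dark_threshold)
    return "defect" if dark / len(image_pixels) > defect_ratio else "ok"

def quality_control(frames):
    """Customer side: send each captured frame for assessment, collect verdicts
    so the facility can act on them in real time."""
    return [assess_product(frame) for frame in frames]

# Simulated grayscale frames (values in [0, 1]); the second has a dark region.
frames = [
    [0.9, 0.8, 0.85, 0.9],
    [0.9, 0.1, 0.2, 0.9],
]
verdicts = quality_control(frames)
```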
The Growing Significance of AIaaS Research
Researchers from diverse fields have recognized the growing significance of AIaaS in shaping the technological landscape. A variety of research streams have emerged, addressing critical aspects of AIaaS:
- Design and Evaluation of AI Services: Scholars such as Boag et al. (2018) and Elshawi et al. (2018) have focused on the design and evaluation of AI services offered through the cloud.
- Adoption and Effective Use: Researchers like Zapadka et al. (2020) and Pandl et al. (2021) have explored the adoption and effective utilization of AIaaS, shedding light on best practices and challenges.
- Misuse and Security: Javadi et al. (2020) have delved into the potential misuse of AIaaS by its users, while others like Truex et al. (2019) have investigated security vulnerabilities and issues associated with AIaaS.
As the AIaaS research field expands, it becomes evident that a cohesive framework and clear terminology are essential to foster meaningful progress.
The Terminological Conundrum
While AIaaS has gained traction in both academia and industry, the terminology used to describe this phenomenon remains diverse and fragmented. Although the term “artificial intelligence as a service” itself is not prevalent in the literature, various other terms are employed to characterize similar concepts:
- Machine Learning as a Service (MLaaS): Perhaps the most widely encountered term, MLaaS emphasizes the cloud-based delivery of machine learning capabilities.
- Deep Learning as a Service (DLaaS): This term focuses on cloud-based deep learning solutions, which are a subset of AI technologies.
- Inference as a Service (IaaS): IaaS narrows down the focus to the specific process of making predictions or inferences using AI models.
- Neural Networks as a Service (NaaS): This term highlights the cloud-based provision of neural network-based AI services.
- Analytics as a Service (AaaS): AaaS broadens the scope to include analytical services delivered via the cloud.
The multiplicity of terms stems from the dynamic nature of AIaaS, driven by technological advancements, innovations, and the increasing diversity of offerings in the market. Additionally, these terms predominantly relate to AI software and applications, aligning them with the conventional Software as a Service (SaaS) cloud model, as evident in research (Javadi et al., 2020).
Towards a Unified Conceptualization
To promote clarity and coherence in AIaaS research and practice, it is imperative to develop a uniform conceptualization. Our catchword article aims to contribute to this goal by proposing a comprehensive definition of AIaaS and organizing it into a hierarchical structure comprising three layers:
- Infrastructure Layer: This layer pertains to the cloud-based infrastructure and hardware resources that support AI services. It corresponds to the Infrastructure as a Service (IaaS) model in cloud computing.
- Platform Layer: The platform layer includes services that facilitate the development, deployment, and management of AI models and applications. This aligns with the Platform as a Service (PaaS) model in cloud computing.
- Service Layer: At the top of the hierarchy is the service layer, which encompasses the actual AI capabilities delivered to end-users. This layer is akin to the Software as a Service (SaaS) model in cloud computing.
Core Characteristics of AIaaS
While AIaaS offerings can vary widely, there are common characteristics that define this phenomenon:
- Abstraction of Complexity: AIaaS abstracts the technical intricacies of AI services, enabling users to leverage AI capabilities without requiring deep expertise in AI algorithms.
- Cloud Inheritance: AIaaS inherits cloud computing characteristics, including on-demand provisioning, scalability, and accessibility via the internet.
Defining the AI as a Service Stack
AIaaS is the embodiment of cloud-based systems that offer on-demand services to both organizations and individuals. These services encompass the entire spectrum of AI, from deploying and developing AI models to training and managing them. It’s important to understand that AIaaS extends beyond ready-to-use AI applications like chatbots powered by natural language processing. It also includes the essential tools and resources required for the creation, operation, and maintenance of AI models.
Much like traditional cloud service models, AIaaS is structured into three layers, organized hierarchically based on the level of abstraction they provide. These layers can be seen as a stack, with each layer building upon the one below. The three layers of the AIaaS stack are as follows:
AI Software Services: These services offer ready-to-use AI applications and building blocks, akin to the conventional Software as a Service (SaaS) cloud layer.
AI Developer Services: Situated in the middle layer, these services provide tools and assistance for developers to implement code and unlock AI capabilities. This layer relates to the Platform as a Service (PaaS) cloud layer.
AI Infrastructure Services: At the foundation of the stack are AI infrastructure services, which furnish the raw computational power needed for building and training AI algorithms, as well as network and storage capacities for data storage and sharing. This layer aligns with the Infrastructure as a Service (IaaS) cloud layer.
It’s worth noting that organizations can establish interdependencies among these layers, creating intricate cloud supply chains. While each layer can stand independently, AI software services can be built on top of AI developer services, which in turn can rely on AI infrastructure services. This interplay among layers forms the AIaaS stack.
AI Software Services
Among the most prominent and widely used aspects of AIaaS are AI software services. These services encompass ready-to-use applications and building blocks, reminiscent of the conventional SaaS cloud models. Today, many AI-based systems are built upon machine learning or deep learning techniques, making these methods the cornerstone of popular AI software services.
Inference as a Service (IaaS): One key facet of AI software services is Inference as a Service (an abbreviation not to be confused with Infrastructure as a Service). Here, users can access pre-trained machine learning models or engage in Machine Learning as a Service (MLaaS), which enables users to create and customize their own machine learning models. Inference as a Service simplifies the process of running AI models by automating data storage, classifier training, and classification.
These services come in various forms, including language services (text analytics or translation), analytics services (product recommendations or knowledge inference), speech services (text-to-speech and speech-to-text), and computer vision services (image and video analysis for object identification and labeling). What makes these services particularly valuable is that they empower developers of all levels to harness machine learning technology without the need for extensive data aggregation. Users can leverage the expertise embedded in pre-trained AI models, making AI accessible to those outside the AI domain.
Inference as a Service, however, tends to be a black-box solution, offering limited customization options for AI models or underlying datasets. To address this limitation, MLaaS emerged to cater to knowledgeable users, providing them with greater control and customizability over AI model configurations. MLaaS guides users through the machine learning pipeline, streamlining tasks such as data preprocessing, feature selection, classifier choice, hyper-parameter tuning, and model training.
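The pipeline steps MLaaS streamlines (preprocessing, hyper-parameter tuning, and training) can be sketched in miniature. This uses synthetic one-dimensional data, a trivial k-nearest-neighbour model, and invented helper names; a real MLaaS platform would orchestrate the same steps at far greater scale.

```python
def normalize(xs):
    """Preprocessing step: scale features to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def knn_predict(train, x, k):
    """Classifier: majority vote among the k nearest 1-D training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

def grid_search(train, valid, ks):
    """Hyper-parameter tuning: pick the k with the best validation accuracy."""
    def accuracy(k):
        return sum(knn_predict(train, x, k) == y for x, y in valid) / len(valid)
    return max(ks, key=accuracy)

# Synthetic 1-D data: class "a" clusters near 0, class "b" near 10.
raw = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
labels = ["a", "a", "a", "b", "b", "b"]
train = list(zip(normalize(raw), labels))
valid = [(0.05, "a"), (0.95, "b")]
best_k = grid_search(train, valid, ks=[1, 3, 5])
pred = knn_predict(train, 0.1, best_k)
```

The value MLaaS adds is precisely that these steps are guided and automated for the user, rather than hand-written as above.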
Enabling Developers with AIaaS
AIaaS extends its reach into the realm of Platform as a Service (PaaS) cloud models, offering developers a wide array of services and tools to facilitate the development and management of AI applications. These services aim to streamline the development process and empower developers, regardless of their level of expertise, to leverage the capabilities of AI.
AI Developer Services Toolbox
Within the AI Developer Services layer, developers have access to a versatile toolbox (Table 2) designed to simplify and accelerate the implementation of AI capabilities in their applications. Some key components of this toolbox include:
AI Frameworks: Open-source AI frameworks like TensorFlow, PyTorch, Caffe, Theano, Horovod, and MXNet are now offered as on-demand services. These frameworks encompass a range of AI algorithms and tools, reducing the effort required for designing, training, and using AI models. For instance, TensorFlow, developed by Google, provides a platform for machine learning with pre-built workflows that expedite model development and training.
Development Tools: AI Developer Services offer development tools such as PyCharm, Microsoft VS Code, Jupyter, and MATLAB. These tools facilitate faster coding and seamless integration of APIs, enabling developers to work efficiently and effectively.
Data Preparation Tools: High-quality data is essential for AI model efficiency. AIaaS providers have responded by offering data preparation tools that assist in data extraction, transformation, and loading. These tools automate the pre-processing and post-processing of data, making it easier for users to prepare data for AI model training and evaluation. This feature is especially valuable to data scientists, allowing them to focus on the data itself.
AI Libraries and SDKs: Developers can access AI libraries and software development kits, which consist of low-level software functions designed to optimize the deployment of AI frameworks on specific infrastructures. These libraries are seamlessly integrated into the source code of AI applications, enabling developers to interact with the service API through pre-defined methods. Examples include libraries for managing data, performing advanced mathematical operations, and adding cognitive capabilities such as computer vision or language translation. These on-demand resources lower the barrier for developers to integrate AIaaS into their existing software products.
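To show what "interacting with the service API through pre-defined methods" means in practice, here is a minimal sketch of an SDK-style facade. The transport is stubbed in-memory, and the class, endpoint path, and method names are all hypothetical, not any real provider's SDK.

```python
class FakeTransport:
    """Stand-in for the network layer a real SDK would manage (auth, HTTPS, retries)."""
    def post(self, path, payload):
        # A real SDK would issue an HTTPS request here; we echo a canned result.
        if path == "/v1/translate":
            return {"translation": f"[{payload['target']}] {payload['text']}"}
        raise ValueError(f"unknown endpoint: {path}")

class CognitiveClient:
    """Hypothetical SDK facade: pre-defined methods instead of raw API calls."""
    def __init__(self, transport):
        self._transport = transport

    def translate(self, text, target="de"):
        response = self._transport.post("/v1/translate",
                                        {"text": text, "target": target})
        return response["translation"]

client = CognitiveClient(FakeTransport())
translated = client.translate("hello world", target="fr")
```

The application code only ever sees `client.translate(...)`; the request construction, endpoint routing, and response parsing are hidden behind the SDK, which is the barrier-lowering effect described above.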
In summary, AI Developer Services provide developers with a comprehensive toolkit to simplify the integration of AI capabilities into their applications. These tools and resources cater to both experienced developers and newcomers, fostering innovation and accelerating the adoption of AI in diverse domains.
AI Infrastructure Services
While AI Developer Services equip developers with the necessary tools and frameworks, the foundation of AIaaS lies in AI Infrastructure Services. These services provide the raw computational power required for building and training AI algorithms, along with the essential network and storage capacities needed for data storage and sharing. In essence, AI infrastructure services align with the Infrastructure as a Service (IaaS) cloud model.
Computational Resources
AIaaS users have a broad spectrum of computational resources at their disposal, ranging from physical servers and virtual machines to containers and specialized AI hardware such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). Complex deep learning and neural network tasks often necessitate the use of GPUs or TPUs to expedite calculations. Providers may also offer additional compute services, including batch and stream processing, container orchestration, and serverless computing, to parallelize and automate machine learning steps. Cloud platforms like AWS and Google Cloud provide access to specialized hardware, such as TPUs, designed for training neural networks using frameworks like TensorFlow.
Data Management
Data is the lifeblood of AI-based systems, and AI infrastructure services are equipped to handle its diverse forms and functions. These services grant access to relational and NoSQL databases, allowing users to upload and integrate external data lakes as inputs for training AI models. High-quality training data is essential but often expensive and time-consuming to create, especially when expert annotation is required. AI infrastructure services address this challenge by facilitating efficient data storage and sharing. Data silos can be combined to enhance the accuracy of AI-based systems or enable their application in the first place.
Moreover, AI infrastructure services enable the provision of data as a service. Users can request data via data APIs or web interfaces, with granular authentication and authorization controls and pricing models. This approach democratizes access to high-quality data sets and promotes collaboration across organizations.
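The data-as-a-service idea above can be sketched in a few lines. The following is a minimal, illustrative example of a data API with per-key authorization and usage-based pricing; the dataset, key names, and prices are all hypothetical, not any specific provider's API.

```python
# Minimal sketch of a data-as-a-service endpoint with per-key
# authorization and usage-based pricing. All names are illustrative.

DATASETS = {"medical-images": ["img-001", "img-002", "img-003"]}

# Each API key maps to the datasets it may read and a per-record price.
API_KEYS = {
    "key-abc": {"allowed": {"medical-images"}, "price_per_record": 0.01},
}

def fetch_records(api_key: str, dataset: str, limit: int):
    """Return (records, cost) or raise if the key lacks access."""
    grant = API_KEYS.get(api_key)
    if grant is None or dataset not in grant["allowed"]:
        raise PermissionError("API key not authorized for this dataset")
    records = DATASETS[dataset][:limit]
    cost = len(records) * grant["price_per_record"]
    return records, cost

records, cost = fetch_records("key-abc", "medical-images", limit=2)
print(records, cost)  # ['img-001', 'img-002'] 0.02
```

In a real deployment, the authorization check and metering would sit behind a web API gateway, but the control flow — authenticate, authorize, serve, meter — is the same.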
Complexity Abstraction
AIaaS simplifies the adoption of AI technologies, making them accessible to organizations, particularly Small and Medium-sized Enterprises (SMEs) lacking the requisite expertise, hardware, or software for AI application development.
Complexity abstraction, a key feature of AIaaS, revolves around concealing the intricate details of AI services, allowing users to offload control and responsibility to the service providers.
Complexity abstraction applies to all layers of the AIaaS stack, streamlining the development process and reducing the time-to-market for AI applications. Users are freed from the complexities of planning, developing, and configuring hardware and developer tools. AIaaS offers a ready-to-use platform, enabling organizations to concentrate on their core business while harnessing the benefits of AI.
Hardware Abstraction: AIaaS eliminates the need for users to manage their hardware resources, which is crucial for efficient AI model execution. AIaaS providers optimize hardware configurations, including CPUs, GPUs, and specialized AI hardware like TPUs, ensuring optimal performance and cost-efficiency.
Simplified Setup and Configuration: Setting up and configuring AI infrastructure is challenging and time-consuming. AIaaS spares users from this complexity, allowing them to focus on AI model development while delegating setup and configuration tasks to service providers.
Maintenance Offloading: AIaaS transfers the burden of maintaining AI frameworks and libraries to the provider, a particularly daunting task due to the frequent updates and changes in the open-source AI community.
Automation
Automation is a fundamental characteristic of AIaaS, impacting each layer of the AIaaS stack. It empowers users to optimize AI models, select suitable hardware architectures, and handle hardware and software failures seamlessly.
Automated Model Optimization: AI software and developer services automate classifier selection and hyper-parameter tuning, reducing the complexity of optimizing AI models. Users can rely on automated algorithms to choose optimal classifiers, significantly simplifying the model development process.
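At its core, automated classifier selection is a search over a space of (classifier, hyper-parameter) combinations. The sketch below shows the idea with an exhaustive grid search; the search space and the scoring function (a stand-in for cross-validated accuracy) are illustrative assumptions, and real AutoML services use far more sophisticated search strategies.

```python
import itertools

# Toy sketch of automated classifier selection and hyper-parameter
# tuning: score each (classifier, params) combination, keep the best.

SEARCH_SPACE = {
    "decision_tree": {"max_depth": [2, 4, 8]},
    "knn": {"n_neighbors": [1, 3, 5]},
}

def score(classifier: str, params: dict) -> float:
    """Stand-in for cross-validated accuracy on the user's dataset."""
    toy = {("decision_tree", 4): 0.91, ("knn", 3): 0.88}
    return toy.get((classifier, next(iter(params.values()))), 0.80)

def auto_select():
    best = (None, None, -1.0)
    for clf, grid in SEARCH_SPACE.items():
        keys = list(grid)
        for values in itertools.product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            s = score(clf, params)
            if s > best[2]:
                best = (clf, params, s)
    return best

print(auto_select())  # ('decision_tree', {'max_depth': 4}, 0.91)
```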
Hardware Optimization: AI infrastructure services automatically adapt hardware configurations to match the unique requirements of AI algorithms. Users benefit from the efficient allocation of resources, selecting the ideal hardware for their specific AI model needs.
Resilience through Automation: AIaaS handles failures in the infrastructure and software stack automatically. This is crucial for AI-based systems, where lengthy training processes can be disrupted by failures. AIaaS retries failing tasks and provides meaningful error messages, enhancing system resilience.
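The retry behaviour described above is commonly implemented as exponential backoff. The following is a simplified sketch of how an AIaaS scheduler might retry a transient infrastructure fault; the task and delay values are illustrative.

```python
import time

def with_retries(task, max_attempts=3, base_delay=0.01):
    """Retry a failing task with exponential backoff, as an AIaaS
    scheduler might for a transient infrastructure fault."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts:
                # Surface a meaningful error after exhausting retries.
                raise RuntimeError(
                    f"task failed after {max_attempts} attempts: {exc}"
                ) from exc
            time.sleep(base_delay * 2 ** (attempt - 1))

# A flaky task that succeeds on the third call, simulating a node loss.
calls = {"n": 0}
def flaky_training_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("node lost")
    return "checkpoint saved"

print(with_retries(flaky_training_step))  # checkpoint saved
```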
Customizability
AIaaS strikes a balance between catering to organizations with limited AI expertise and offering advanced users the ability to customize, configure, and control AI models to their specific needs.
Optimal Model Configuration: Customizability allows users to experiment with classifier selection and hyper-parameter tuning, optimizing AI models for their unique datasets. Users can select the most suitable configurations for their specific use cases.
Configurable AI Infrastructure: AI infrastructure services offer an architecture that can be extended and customized. Users can integrate custom algorithms, third-party services, and modules into the workflow, enhancing flexibility and control.
Community and Collaboration: Extendable architectures foster collaboration within AI communities, enabling users to collectively enhance AIaaS functionalities. Integration with major cloud-based AI infrastructures encourages customization and tailoring to specific requirements.
Inherited Cloud Characteristics
AIaaS leverages the inherent characteristics of cloud computing, adding to its appeal and utility.
On-Demand Self-Service: Users can provision AI capabilities and resources without human intervention, enabling easy scalability and resource management. Trial subscriptions allow users to test AIaaS offerings before committing.
Resource Pooling: AIaaS leverages resource pooling to support parallel computations and accommodate thousands of users concurrently. This is particularly valuable for parameter configuration, classifier selection, and resource sharing.
Scalability: AIaaS offers scalability by elastically provisioning and releasing hardware resources as per user-defined configurations. This flexibility caters to the evolving hardware requirements of AI applications.
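Elastic provisioning ultimately reduces to a scaling decision driven by load and user-defined bounds. The sketch below shows one plausible policy — sizing a worker pool to a training-job queue — with illustrative parameters; real autoscalers also factor in startup latency, cost, and cooldown periods.

```python
import math

def scale_decision(current_nodes, queue_length, jobs_per_node=4,
                   min_nodes=1, max_nodes=16):
    """Elastic provisioning sketch: size the worker pool to the
    training-job queue, within user-configured bounds."""
    if queue_length:
        desired = math.ceil(queue_length / jobs_per_node)
    else:
        desired = min_nodes  # scale to the floor when idle
    return max(min_nodes, min(max_nodes, desired))

print(scale_decision(2, 30))   # 8  (scale out for a growing queue)
print(scale_decision(8, 0))    # 1  (release idle nodes)
print(scale_decision(4, 100))  # 16 (respect the configured ceiling)
```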
Broad Network Access: AIaaS is accessible over the network through standardized APIs and user-friendly interfaces, simplifying integration into existing products and workflows.
Measured Service: Users benefit from transparent resource usage monitoring, control, and reporting, leading to cost-effective ‘pay-as-you-go’ pricing models. This predictability is especially advantageous for SMEs and organizations with limited AI expertise.
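A 'pay-as-you-go' bill is simply metered usage multiplied against a published price sheet, which is what makes costs predictable. The prices, units, and resource names below are made up for illustration.

```python
# Sketch of 'pay-as-you-go' metering: sum per-resource usage against a
# published price sheet. Prices and units are illustrative.

PRICE_SHEET = {            # price per unit
    "gpu_hours": 2.50,
    "storage_gb_month": 0.02,
    "api_calls_1k": 0.40,
}

def monthly_bill(usage: dict) -> float:
    unknown = set(usage) - set(PRICE_SHEET)
    if unknown:
        raise ValueError(f"unmetered resources: {unknown}")
    return round(sum(PRICE_SHEET[k] * v for k, v in usage.items()), 2)

print(monthly_bill({"gpu_hours": 12, "storage_gb_month": 500,
                    "api_calls_1k": 25}))  # 50.0
```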
Despite these strengths, AIaaS also raises significant challenges. The following sections delve into these challenges and highlight the critical areas for future research within the Business and Information Systems Engineering (BISE) community.
Challenges Inherited from AI and Cloud Computing
Racial Bias: AI in the healthcare industry has been marred by issues of racial bias (Obermeyer et al. 2019). AIaaS inherits these concerns, emphasizing the need for robust mitigation strategies to address bias in AI systems, particularly in critical domains like healthcare.
Lack of Control: AIaaS shares common challenges with cloud computing, including concerns about users’ limited control over their AI systems (Weinhardt et al. 2009; Trenz et al. 2019). The challenge lies in striking a balance between user empowerment and ease of use.
Security Concerns: Security is a shared responsibility between AIaaS users and service providers. Ensuring the secure integration of AI models and data within the cloud environment presents new challenges that require comprehensive solutions (Trenz et al. 2019).
Black-Box Perception: AIaaS often suffers from a “black-box” perception, where the inner workings of AI models are hidden from users. This opacity erodes trust, accountability, and explainability (Javadi et al. 2020; Pandl et al. 2021).
Novel Socio-technical Challenges in AIaaS
Trustworthy AI (TAI) Requirements: To gain user trust and ensure ethical AI practices, AIaaS must adhere to Trustworthy AI (TAI) guidelines, such as those issued by the European Union (European Commission 2019). TAI demands that AI is developed, deployed, and used ethically, adhering to laws and principles.
Empowering Human Agency and Oversight: Requirement #1 demands that AIaaS empowers users to make informed decisions and exercise control over the systems they employ. However, providing control without compromising usability is a delicate balance (Yao et al. 2017).
Ensuring Technical Robustness and Safety: Requirement #2 necessitates that AIaaS is resilient, secure, accurate, reliable, and reproducible. Despite being perceived as more resilient than in-house AI, AIaaS still faces reliability challenges (Fig. 3).
Interoperability of AIaaS: AIaaS operates within a complex ecosystem of providers, stakeholders, and users, making interoperability crucial. Achieving effective cloud interoperability is a research area that needs attention, including the development of standards and best practices (Fig. 3).
(Fig. 3 adapted from Floerecke et al. 2020, link.springer.com/article/10.1007/s12599-021-00708-w#ref-CR20)
Future Research Directions for BISE
Bias Mitigation: Research should focus on developing bias detection and mitigation techniques specific to AIaaS to ensure fair and equitable outcomes across diverse user groups.
Transparency and Explainability: Future work should explore methods to increase the transparency and explainability of AIaaS systems, allowing users to understand the decision-making processes of these systems.
Security and Reliability: Research must address security concerns unique to AIaaS, with a focus on safeguarding sensitive data and enhancing system reliability, especially among smaller AIaaS providers.
User-Centric Design: Investigate user-centric design principles for AIaaS to strike a balance between user control and system ease of use, ensuring usability for users with varying levels of expertise.
Ethical Considerations: BISE researchers should delve into the ethical implications of AIaaS, creating frameworks for ethical AI development, deployment, and use.
Model Diversity and Fairness: Research is needed to tackle the challenge of providing fair and accurate AIaaS services that cater to diverse user groups and cultures, especially concerning pre-trained and transferred models.
Data Privacy and Governance: Explore innovative methods for safeguarding data privacy in AIaaS, including the use of encryption, blockchain, and third-party certifications to ensure compliance with privacy regulations.
But what happens if these AI service providers face operational disruptions?
The Significance of AI Service Providers
AI service providers are the backbone of AI adoption across various sectors. They offer a wide range of services, including natural language processing, image recognition, predictive analytics, and more. These services underpin applications such as virtual assistants, autonomous vehicles, fraud detection, and medical diagnosis. Given their widespread influence, any disruption to AI service providers can have cascading effects on businesses, economies, and society at large.
Understanding the Risks
Before addressing how to mitigate these risks, it’s essential to understand the potential challenges associated with AI service provider disruption:
- Economic Impact: A significant disruption could lead to financial losses for companies and investors. The sudden halt of AI-dependent services can disrupt supply chains, customer relations, and financial markets.
- Safety Concerns: In critical applications like autonomous vehicles and healthcare, AI disruptions could result in accidents, misdiagnoses, and other life-threatening situations. Ensuring safety remains paramount.
- Privacy and Security: Breaching AI service providers can lead to unauthorized access to sensitive data. Privacy violations and data misuse are major concerns.
- Trust Erosion: Repeated disruptions can erode public trust in AI technologies and service providers, hindering the broader adoption of AI solutions.
Mitigating the Risks
To address these risks effectively, a multifaceted approach is necessary. Below are strategies for mitigating the risks associated with AI service provider disruption:
Robust Cybersecurity Measures
- Continuous Monitoring: AI service providers should invest in real-time monitoring of their systems to detect unusual activities and potential vulnerabilities promptly.
- Encryption: Implement strong encryption techniques to protect data both at rest and in transit.
- Access Controls: Enforce strict access controls to limit who can access critical systems and data.
- Redundancy and Backups: Maintain redundant infrastructure and data backups to ensure quick recovery in the event of a disruption.
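One concrete building block behind the access-control and encryption measures above is signing each request with a shared secret, so the provider can verify both the caller's identity and the payload's integrity. The sketch below uses Python's standard `hmac` module; the secret and payload format are illustrative, not any specific provider's scheme.

```python
import hashlib
import hmac

# Sketch: HMAC-sign each request to an AI service so the provider can
# verify the caller and detect tampering. Secret/payload are illustrative.

SECRET = b"provider-issued-shared-secret"

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(payload), signature)

sig = sign(b'{"model": "fraud-detector", "input": [1, 2, 3]}')
print(verify(b'{"model": "fraud-detector", "input": [1, 2, 3]}', sig))  # True
print(verify(b'{"model": "fraud-detector", "input": [9, 9, 9]}', sig))  # False
```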
Regulatory Frameworks
- Government Oversight: Governments should establish comprehensive regulations for AI service providers, mandating stringent security standards and protocols.
- Data Protection Laws: Enforce robust data protection laws to safeguard user data and hold companies accountable for data breaches.
- Penalties for Non-Compliance: Impose significant penalties for AI service providers that fail to meet security and privacy standards, incentivizing compliance.
Diversification and Interoperability
- Diverse Service Providers: Encourage organizations to diversify their AI service providers, reducing dependency on a single entity.
- Interoperability Standards: Promote the development of interoperability standards to ensure that AI systems can seamlessly switch between providers in case of disruption.
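Architecturally, the diversification and interoperability goals above amount to putting a common interface in front of several providers so an application can fail over when one is disrupted. The sketch below illustrates this with hypothetical providers and a made-up `predict()` contract.

```python
# Sketch of reducing single-provider dependency: a common interface over
# several AI providers with automatic failover. Provider names and the
# predict() contract are illustrative.

class ProviderDown(Exception):
    pass

class ProviderA:
    def predict(self, text):
        raise ProviderDown("provider A outage")

class ProviderB:
    def predict(self, text):
        return {"label": "positive", "provider": "B"}

def predict_with_failover(providers, text):
    last_error = None
    for p in providers:
        try:
            return p.predict(text)
        except ProviderDown as exc:
            last_error = exc  # record the outage and try the next provider
    raise RuntimeError("all providers unavailable") from last_error

result = predict_with_failover([ProviderA(), ProviderB()], "great service")
print(result)  # {'label': 'positive', 'provider': 'B'}
```

This only works in practice if providers expose compatible interfaces — which is exactly what interoperability standards are meant to guarantee.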
Public-Private Collaboration
- Information Sharing: Foster collaboration between AI service providers, governments, and other stakeholders to share threat intelligence and best practices.
- Joint Response Plans: Develop coordinated response plans that outline actions to be taken in the event of a disruption.
Education and Awareness
- User Training: Educate users and organizations on best practices for securing AI-dependent systems and data.
- Transparency: Promote transparency in AI systems, allowing users to understand how their data is used and secured.
Ethical AI Development
- Bias Mitigation: Ensure AI models are trained and deployed without bias, reducing the potential for harm and discrimination.
- Ethical Guidelines: Establish clear ethical guidelines for AI service providers to follow in their development and deployment processes.
Conclusion
Mitigating the risks associated with AI service provider disruption is crucial to ensuring the continued growth and adoption of AI technologies. With the right combination of cybersecurity measures, regulatory frameworks, diversification strategies, collaboration, education, and ethical development, we can create a secure and trustworthy AI-powered future. It is a collective responsibility, involving governments, businesses, and individuals, to safeguard the potential benefits of AI while minimizing its risks.
Reference links:
https://link.springer.com/article/10.1007/s12599-021-00708-w
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3824736