Beyond Boundaries: The Promise Of Conversational AI In Healthcare

In the study by Cheng et al [79], users responded positively, particularly to features of conversational agents that allowed for personalization and to the conversational agent's ability to understand and respond to natural conversation flow. Some difficulties included learning commands, restricted answer options, slow processing speed, and some problematic responses [79]. Lobo et al [55] reported user acceptability in the form of usability, where the conversational agent had a system usability score of 88, which was considered very good. Griol et al [62] considered the perspective of caregivers of patients with Alzheimer disease when judging the acceptability of the conversational agent. The global rating for the system (on a scale from 0 to 10) was 8.6, and the application was thought to be attractive, adequate, and appropriate for its purpose. In another study, Griol et al [63] employed an emotionally sensitive conversational agent for chronic respiratory disease patients, who rated this agent significantly higher for interaction rate, frequency, and empathy than the baseline version.

Automation of these tasks could free clinicians to focus on more complex work and increase the accessibility of health care services for the public. An overarching assessment of the acceptability, usability, and effectiveness of these agents in health care is needed to collate the evidence so that future development can target areas for improvement and potential for sustainable adoption. The criteria included primary research studies that focused on consumers, caregivers, or healthcare professionals in the prevention, treatment, or rehabilitation of chronic diseases using CAs, and that tested the system with human users. Reviews, perspectives, opinion papers, and news articles were excluded. In addition, studies that reported on evaluations based on human users interacting with the entire health system were excluded.

Health-focused apps with chatbots (“healthbots”) have a critical role in addressing gaps in quality healthcare. There is limited evidence on how such healthbots are developed and applied in practice. Our review of healthbots aims to classify types of healthbots, contexts of use, and their natural language processing capabilities.

As AI-powered chatbots become more prevalent in healthcare settings, there is a risk that sensitive patient information could be accessed or shared without proper consent or security measures in place. This could result in serious consequences for patient confidentiality and trust in the healthcare system. Furthermore, AI can help to proactively ensure that patient data is up-to-date, prompting users to fill in missing or outdated information. Such advanced Conversational AI systems not only lead to a more organized healthcare establishment but also offer patients a smoother, more responsive experience. The CAs in the papers used various AI methods such as speech recognition, facial recognition, and NLP.
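To make the idea of proactively keeping records current more concrete, here is a minimal sketch, in Python, of how an agent might turn missing or stale record fields into follow-up prompts. The field names, the one-year staleness threshold, and the helper function are illustrative assumptions rather than anything described in the studies above.

```python
from datetime import date, timedelta

# Hypothetical set of fields the agent keeps current (assumption for this sketch).
REQUIRED_FIELDS = ["phone", "emergency_contact", "current_medications", "allergies"]
STALE_AFTER = timedelta(days=365)  # assumed re-confirmation interval

def build_update_prompts(record: dict, last_verified: dict) -> list:
    """Return conversational prompts for fields that are missing or out of date."""
    prompts = []
    today = date.today()
    for field in REQUIRED_FIELDS:
        label = field.replace("_", " ")
        if not record.get(field):
            prompts.append(f"I don't have your {label} on file. Could you share it?")
        elif today - last_verified.get(field, date.min) > STALE_AFTER:
            prompts.append(f"Is your {label} still '{record[field]}'?")
    return prompts

# Example: allergies are missing and the phone number was last confirmed two years ago.
record = {"phone": "555-0100", "emergency_contact": "Jane Doe",
          "current_medications": "metformin", "allergies": ""}
verified = {"phone": date.today() - timedelta(days=730)}
for prompt in build_update_prompts(record, verified):
    print(prompt)
```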

This technology can assist with tasks such as scheduling appointments, reminding patients of medication times, answering medical inquiries, providing healthcare information, and more. Furthermore, by watching and evaluating how patients interact with the conversational AI system, healthcare providers may immediately fix any gaps in care. The questions patients ask can reveal a lot about their degree of medical literacy, whether they find certain parts of attending the clinic challenging, and so on. This might help you determine what kind of information you should put in front of patients and what you should leave out to make their encounters more pleasant and enlightening.

Characteristics of Included Studies

In the long term, Conversational AI can serve as a virtual ‘healthcare consultant’ at any point in time, answering questions that millions of people across the globe have about major and minor health-related issues on a daily basis. In this regard, a conversation with an AI Assistant would efficiently substitute for the initial phone call you might make to your doctor to discuss your concerns before making an in-person appointment. These days, healthcare professionals are overstretched and have to juggle hundreds of tasks at work. But sometimes, instead of helping as expected, technology makes things more complicated for them and results in costly errors.

As with any technology, there are both ethical and practical considerations that need to be taken into account before widespread adoption. Missed appointments, delayed vaccinations, or forgotten prescriptions can have real-world health implications. Conversational AI, by sending proactive and personalized notifications, ensures that patients are always in the loop about their healthcare events.
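As a rough illustration of how such proactive notifications could be generated, the sketch below (in Python, with invented event types and lead times; nothing here reflects a specific vendor's implementation) checks upcoming healthcare events against a lead-time window and produces personalized reminder messages.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class HealthEvent:
    patient_name: str
    kind: str        # assumed event types: "appointment", "vaccination", "prescription_refill"
    due: datetime

# Assumed lead times: how far ahead of each event the reminder should go out.
LEAD_TIMES = {
    "appointment": timedelta(days=1),
    "vaccination": timedelta(days=7),
    "prescription_refill": timedelta(days=3),
}

def due_reminders(events, now):
    """Return reminder texts for events whose reminder window has opened but not passed."""
    messages = []
    for event in events:
        if event.due - LEAD_TIMES[event.kind] <= now < event.due:
            messages.append(
                f"Hi {event.patient_name}, your {event.kind.replace('_', ' ')} "
                f"is coming up on {event.due:%d %b %Y at %H:%M}."
            )
    return messages

events = [HealthEvent("Alex", "appointment", datetime(2024, 3, 5, 9, 30))]
print(due_reminders(events, datetime(2024, 3, 4, 10, 0)))
```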

Most of these studies were papers describing the development and initial evaluation of conversational agents, and half of them did not have participants [40,44,55]. Studies that did not have an explicit design were difficult to classify as qualitative or interpretative studies. Therefore, these studies were coded as other and assessed using the AXIS tool for cross-sectional studies, which was deemed to provide the most systematic evaluation of the various elements of the studies [30]. The quality of these studies was assessed as well as possible; however, the judgments should be considered in the context of these limitations.

Summary of the quality assessment and judgments of cohort studies using the CASP (Critical Appraisal Skills Programme) Cohort Study Checklist.

If a patient seems discontented or their issues are too complex, the AI ensures a smooth transition to a human agent. This blend of technology and human touch ensures that patients always feel heard and valued. One of the hallmarks of modern healthcare is ensuring patient autonomy and ease of access. Conversational AI, by enabling features like MyChart account creation and password reset, serves this exact purpose.

How secure is patient data with AI and chatbot use?

Furthermore, the high frequency of publication indicates the feasibility and support to conduct research successfully in this area. The literature on conversational agents in health care is largely descriptive and aimed at treatment and monitoring and health service support. It mostly reports on text-based, artificial intelligence–driven, and smartphone app–delivered conversational agents.

Two studies discussed conversational agents for the management of obesity in younger patients, including adolescents [46,50]. They largely employed a coach-like conversational agent to promote physical activity [51] and healthy eating [52], sometimes with incentive provision, and provided techniques on how to reverse obesity [30,47,49,71]. The validity of the evidence extracted from the included studies was also affected by limitations in the structure of this review. The SF/HIT was used to provide a structured set of whole system implementation outcomes to evaluate the conversational agents [31]. However, an issue with the use of this framework, which was discovered during analysis, was that many of the included studies were describing system innovation. Therefore, they did not address or provide evidence for many of the outcomes described by the SF/HIT.

AI tools are instrumental in reducing the administrative burden on healthcare providers. Scheduling appointments, managing patient records, and processing insurance claims become more efficient. Babylon Health’s AI system exemplifies this by handling patient triage and preliminary consultations, freeing up valuable time for healthcare professionals. Now, if NLP allows the system to understand and reply in human language, machine learning, a set of techniques that enables machines to learn from past and current data, optimizes processes for more accurate results.

Out of 26 conversational agents, 16 were chatbots (computer programs that simulate human conversation via voice or text communication). Seven were embodied conversational agents (ECAs): virtual agents that appear on computer screens, equipped with virtual, human-like bodies and able to hold real-time conversations with humans. One was a conversational agent in a robot, and another was a relational agent explicitly designed to remember history and manage future expectations in its interactions with users. The characterization of the conversational agents is shown in Table 3; this summary is adapted from Laranjo et al 2018 [27].

The initial database searches yielded 11,401 records, and another 28 records were retrieved through additional sources such as the gray literature sources and screening of reference lists of relevant studies. A total of 196 duplicates were identified and removed, leaving 11,233 titles and abstracts that needed to be screened. Title and abstract screening led to the exclusion of 11,099 records, resulting in 134 full texts that needed to be assessed for eligibility. Of these, 87 articles were excluded, resulting in a final pool of 47 reports comprising 45 studies and 2 ongoing trials (Figure 2). The search initially yielded 2293 apps from both the Apple iOS and Google Play stores (see Fig. 1).

In addition, we excluded articles concerning ECAs, relational agents, animated conversational agents, or other conversational agents with a visual or animated component. To our knowledge, our study is the first comprehensive review of healthbots that are commercially available on the Apple iOS and Google Play stores. Another review, conducted by Montenegro et al, developed a taxonomy of healthbots related to health [32]. Both of these reviews focused on healthbots that were available in the scientific literature only and did not include commercially available apps. Our study leverages and further develops the evaluative criteria developed by Laranjo et al and Montenegro et al to assess commercially available health apps [9,32]. While healthbots have a potential role in the future of healthcare, our understanding of how they should be developed for different settings and applied in practice is limited.

When psychology researchers train AI chatbots by overseeing their replies, the chatbots learn to respond empathically. Conversational AI is able to understand your symptoms and provide consolation and comfort to help you feel heard whenever you disclose any medical conditions you are struggling with. Intelligent conversational interfaces address this issue by utilizing NLP to offer helpful replies to all questions without requiring the patient to look elsewhere.

Post-treatment Care

While we live in an Internet-backed world with easy access to information of all sorts, we are unable to get personalized healthcare advice with just an online search for medical information. This is where conversational AI tools can be put to use to check symptoms and suggest a step-by-step diagnosis. They can lead a patient through a series of questions in a logical sequence to understand their condition and flag cases that may require immediate escalation. At times, getting an accurate diagnosis after scheduling an appointment is what a patient needs for further review. AI and chatbot integration in healthcare refers to the application of Artificial Intelligence and automated response systems (chatbots) within the healthcare sector.

These systems may be used as step-by-step diagnosis tools, guiding users through a series of questions and allowing them to input their symptoms in the right sequence. The benefit is that the AI conversational bot converses with you while evaluating your data. The platform provides the flexibility to update dialogues, conversational flows, and responses as required. The GUI can also analyze and process the necessary data for the chatbot to function as it should and deliver actionable business insights based on bot data analytics.
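The sketch below shows one way such a step-by-step flow could be organized: a small, hand-written decision tree walked from question to question. The questions, branches, and escalation rules are invented for illustration; they are not drawn from any of the systems or studies discussed here, nor are they medical guidance.

```python
# Each node maps to (question, {answer: next node or outcome}).
TRIAGE_TREE = {
    "start": ("Are you experiencing chest pain or difficulty breathing?",
              {"yes": "escalate", "no": "fever"}),
    "fever": ("Have you had a fever above 38°C for more than two days?",
              {"yes": "book_gp", "no": "self_care"}),
}

OUTCOMES = {
    "escalate": "Please contact emergency services or go to the nearest emergency department.",
    "book_gp": "I recommend booking an appointment with your doctor. Shall I help schedule one?",
    "self_care": "Your symptoms sound mild. Rest, stay hydrated, and check back if anything changes.",
}

def run_triage(answers):
    """Walk the decision tree using pre-collected yes/no answers keyed by node name."""
    node = "start"
    while node in TRIAGE_TREE:
        _question, branches = TRIAGE_TREE[node]
        node = branches[answers[node]]
    return OUTCOMES[node]

# Example: no chest pain, but a persistent fever -> suggest booking a GP appointment.
print(run_triage({"start": "no", "fever": "yes"}))
```

A production system would layer NLP on top of a flow like this, so that free-text replies are mapped onto the branches rather than requiring literal yes/no answers.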

Second, how do users rate the usability and satisfaction of the conversational agents, and what specific elements of the agents do they like and dislike? Finally, what are the current limitations and gaps in the utility of conversational agents in health care? These objectives build on previous systematic reviews while widening the scope of included studies to update the body of knowledge on conversational agents in health care and to inform future research and development.


It should be noted that the AXIS tool used to assess the other studies was designed for cross-sectional studies and does not fit exactly with the designs of these studies. Therefore, it is possible that these studies would perform better when assessed by a tool specific to their study type. Tables depicting the judgments for each question of the CASP cohort and qualitative checklists and the AXIS tool for the cross-sectional and other studies are included in Multimedia Appendices 6-9 [8,12,14,15,32,33,35,36,38-45,48,50-52,54-56]. One study discussed a condition-specific conversational agent application targeted at improving the quality of life and medication adherence of breast cancer patients [53]. Participants implied a positive experience when interacting with the conversational agent, whereby 88% said it provided them with support in tracking their treatment and mentioned that they would recommend the conversational agent to their friends. We used the principles of thematic analysis to analyze the content, scope, and personality traits of the conversational agents.

While there were 78 apps in the review, accounting for the multiple categorizations, this multi-select characterization yielded a total of 83 (55%) counts for one or more of the focus areas. For both text-based and voice-based systems, it is the data that empowers the underlying engine to deliver a satisfactory response. The information also acts as a goldmine for valuable insights that healthcare service providers can utilise to improve the quality of care offered and the overall patient experience.

Woebot, a chatbot therapist developed by a team of Stanford researchers, is a successful example of this. Another significant aspect of conversational AI is that it has made healthcare widely accessible. People can set and meet their health goals, and receive routine tips to lead a healthy lifestyle. Haptik’s AI Assistant, deployed on the Dr. LalPathLabs website, provided round-the-clock resolution to a range of patient queries. It facilitated a seamless booking experience by offering information about nearby test centers, and information on available tests and their pricing.

A powerful tool for disseminating accurate and essential information to those who need it would definitely be a great asset, and that’s where Conversational AI can help. Patients often undergo periodic checkups with a doctor for post-treatment recovery consultation. However, if they fail to understand instructions in their post-care plan, it can slow their recovery and may harm their health. This is where they need a system that can bridge the communication gap and support them during recovery. It allows patients to schedule appointments without the frustration of using a complicated interface. In addition, they can also reschedule or cancel appointments easily if needed, eliminating the risk of scheduling conflicts.

Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being

Inkster et al [61] employed the Patient Health Questionnaire-9 self-reported depression scale to note significant improvements in depression scores in the high user group compared with the low user group [61]. In the study by Kamita et al [67], the counseling bot encouraged significant improvements in users’ self-esteem, anxiety, and depression compared with the control condition. Besides effectiveness, user ratings of acceptability, using the technology acceptance model, were higher in the conversational agent condition compared with the control [67]. Gaffney et al [57] proposed a conversational agent MYLO that was significantly better than the existing conversational agent ELIZA in problem solving and helpfulness, but both were equally effective in lowering distress. Miner et al [84] compared Apple’s Siri, Microsoft’s Cortana, Samsung’s S Voice, and Google Now on their abilities to respond to questions about mental health, interpersonal violence, and physical health.

The Cochrane Collaboration risk-of-bias tool was used to evaluate the risk of bias in randomized controlled trials (RCTs) [28]. The CASP (Critical Appraisal Skills Programme) tools for cohort and qualitative studies were used for the respective studies [29], and the Appraisal tool for Cross-Sectional Studies (AXIS) tool was used to assess the quality of cross-sectional survey studies [30]. Generative AI in healthcare offers the potential to formulate personalized treatment plans by analyzing vast patient datasets.

By augmenting human activities and capabilities, conversational AI for healthcare can unleash immense improvements in healthcare quality, accessibility, and costs. This is why Accenture projected that the market for AI technology in healthcare would grow from $600 million in 2014 to over $6.6 billion by 2021. At Mass General Brigham (MGB), a not-for-profit integrated healthcare system and national leader in medical research, teaching, and patient care, the CIO is on a mission to combat burnout among the healthcare provider’s 77,000 employees.

Eligibility criteria and screening

It can identify patterns and trends that can help in disease diagnosis, drug discovery, patient care, and more. However, if the patient misunderstands a post-care plan instruction or fails to complete particular activities, their recovery outcomes may suffer. A conversational AI system can help overcome that communication gap and assist patients in their healing process. For example, the patient could submit information regarding what post-care steps they have taken and where they are in their treatment plan.

It utilizes advanced artificial intelligence, machine learning, and natural language processing to comprehend free-form human speech or text input. This is vastly more sophisticated than simple chatbots, which rely on scripted responses. Future reviews of conversational agents in health care could be extended to include constrained NLP and non-NLP conversational agents. Some students who used the virtual patients also reported that it was difficult to empathize [50] and that the agent did not sufficiently encompass real situational complexity [15].

If the study did not address the outcome in question, it was coded as neutral or negative. In addition to these use cases, there’s growing interest in using conversational AI for mental health support, chronic disease management, and patient education. As the technology advances and integrates more seamlessly into healthcare operations, its applications will likely continue to expand. Section 2 presents methods explaining the search strategy, eligibility criteria, screening, and data extraction processes.

Our discussion has highlighted both the pros and cons of implementing Conversational AI in a healthcare organization and explored its role in improving patient experience, customer service, and engagement. By ensuring such processes are smooth, Conversational AI ensures that patients can access their health data without unnecessary obstacles, promoting a sense of ownership and trust in the healthcare system. With this technology, patients can effortlessly request prescription refills, access their test results, and get details about their medications. By ensuring patients have this information at their fingertips, Conversational AI fosters a sense of autonomy and control over one’s health, making them more engaged in their healthcare journey with a human-like conversation.

  • Healthcare organizations implementing conversational AI are seeing improvements in efficiencies, access, costs, retention, and patient satisfaction by automating routine administrative tasks, freeing up staff for higher-value work, and more.
  • By analyzing patient language and sentiments during interactions, it can gauge a patient’s emotional state.
  • There were a wide variety of areas of health care targeted by the conversational agents of the included studies.
  • Many different types of conversational agents that use NLP have been developed, including chatbots, embodied conversational agents (ECAs), and virtual patients, and are accessible by telephone, mobile phones, computers, and many other digital platforms [7-10].

Combined with conversational AI, it promises to elevate the patient experience, merging immediate communication with tailored healthcare insights. Table 2 presents the characteristics of the CAs (the categories defined in Box 1) that were evaluated in the included studies. In addition, it shows the AI methods used, based on a list of keywords defined from three systematic literature reviews of CAs in health care [1,6,27]. The study focused on health-related apps that had an embedded text-based conversational agent, were available for free public download through the Google Play or Apple iOS store, and were available in English. A healthbot was defined as a health-related conversational agent that facilitated a bidirectional (two-way) conversation.

They can also deliver specific information about the actions to be taken to meet those goals, prompting patients to feel engaged. A conversational AI-based chatbot can answer FAQs and help troubleshoot common issues, in contrast to the limited capabilities of a conventional chatbot. As per WHO statistics, the world is facing a shortage of 4.3 million doctors, nurses, and other healthcare staff.

In the US, about 60% of adults have chronic diseases, which account for approximately 86.2% of the $2.6 trillion in annual health care expenditure [20]. In 2018, the Australian Institute of Health and Welfare reported that diabetes is one of Australia’s eight common chronic conditions, which contribute to 61% of the disease burden, 37% of hospitalizations, and 87% of deaths [21]. About 1.13 billion people were living with hypertension in 2015, and the number is still increasing. All statistics about chronic conditions show how serious they are and their effect on people’s lives [19].

Siri responded appropriately and empathetically to issues concerning depression and physical health, and Cortana responded appropriately and empathetically to matters involving interpersonal violence [84]. Future research should also include more qualitative evaluations of the features that users like and dislike. Only 18 of the 31 studies included in this review reported specific user feedback, despite the fact that 7 of the remaining 13 studies included some measure of usability or user perceptions. It will be important to identify all of the structural, physical, and psychological barriers to use if conversational agents are to achieve their potential for improving health care provision and reducing the strain on health care resources. To this end, it would be useful for future studies to structure their evaluation of conversational agents around a behavioral change framework (eg, the Behavior Change Wheel framework [59]). This is important not only when evaluating the effectiveness of behavior change-focused conversational agents, but also when determining whether and how the adoption of new conversational agent technology will be successful.

The underlying technology that supports such healthbots may include a set of rule-based algorithms, or employ machine learning techniques such as natural language processing (NLP) to automate some portions of the conversation. Innovative conversational artificial intelligence (AI) powered systems have been gaining momentum in the healthcare industry in recent years. Automated artificial intelligence programs are built to allow effective communication by providing an interface between the computer and the user. Conversational AI is making a significant impact on the healthcare industry for both medical health providers and patients. Several natural language processing (NLP) platforms, particularly those using natural language understanding (NLU), such as Google Dialogflow, IBM Watson, and Rasa, are used in conversational AI. This paper intends to present an architecture adopted to deploy a successful conversational AI agent, named Ainume, using Google Dialogflow on the Google Cloud Platform (GCP).
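As a hedged sketch of the Dialogflow pattern mentioned above, the snippet below uses the publicly documented google-cloud-dialogflow Python client to send one utterance to an agent and read back the fulfillment text. The project ID, session ID, and sample utterance are placeholders, and this is not presented as the Ainume system's actual code.

```python
from google.cloud import dialogflow  # pip install google-cloud-dialogflow

def detect_intent(project_id: str, session_id: str, text: str, language_code: str = "en") -> str:
    """Send a single user utterance to a Dialogflow agent and return its fulfillment text."""
    client = dialogflow.SessionsClient()  # uses GCP application-default credentials
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(request={"session": session, "query_input": query_input})
    return response.query_result.fulfillment_text

# Placeholder IDs; requires an existing Dialogflow agent and GCP credentials.
# print(detect_intent("my-health-agent", "patient-123", "I need to refill my prescription"))
```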

All disagreements were discussed, and if a consensus was not reached, a third reviewer was consulted. It’s being utilized for scheduling appointments, guiding post-treatment care, providing patient support, sending reminders, and even handling billing issues. While it offers efficiency and round-the-clock service, ensuring data privacy and ethical considerations remains crucial during its deployment. The purpose of AI chatbots in healthcare is to manage patient inquiries, provide crucial information, and arrange appointments, thereby allowing medical staff to focus on more urgent matters and emergencies.

When we talk about the healthcare sector, we aren’t referring solely to medical professionals such as doctors, nurses, medics, etc., but also to administrative staff at hospitals, clinics, and other healthcare facilities. The efficiency of AI in screening and analysis makes it economically viable to pursue treatments for rare or neglected diseases. These conditions often do not receive the same level of attention in traditional drug discovery due to the high costs and lower financial incentives. Once potential ‘hits’ are identified (compounds that show desired activity against a biological target), the AI can also assist in the ‘lead optimization’ process.

The underlying ML technology means that the chatbot undergoes consistent “training” to become smarter and more intelligent over time. Wellstar, the largest and most integrated healthcare system in the state of Georgia, leverages conversational AI to enable employees to self-service their own IT support needs. The healthcare provider deployed Moveworks’ copilot, affectionately called WALi, on their messaging platform.

In healthcare app and software development, AI can help in developing predictive models, analyzing health data for insights, improving patient engagement, personalizing healthcare, and automating routine tasks. Data security is a top priority in healthcare, and AI and chatbot platforms should adhere to HIPAA guidelines and other relevant data protection regulations. However, it’s important to ensure that any AI or chatbot tool used is from a trusted source and complies with all necessary security regulations. Healthcare providers must guarantee that their solutions are HIPAA compliant to successfully adopt Conversational AI in the healthcare industry.
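One small, concrete piece of such safeguards is making sure obvious identifiers never end up in chat logs. The sketch below masks a few common patterns before a transcript is stored; it is illustrative only, the regular expressions are simplistic assumptions, and on its own this does not make a system HIPAA compliant (which also requires access controls, encryption, audit trails, and business associate agreements, among other measures).

```python
import re

# Simplistic, assumed patterns for a few common identifiers.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(message: str) -> str:
    """Replace each matched identifier with a labelled placeholder before logging."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label.upper()} REDACTED]", message)
    return message

print(redact("Call me at 617-555-0199 or email jane.doe@example.com"))
# -> Call me at [PHONE REDACTED] or email [EMAIL REDACTED]
```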

Included studies that evaluated conversational agents reported on their accuracy (in terms of information retrieval, diagnosis, and triaging), user acceptability, and effectiveness. Some studies reported on more than 1 outcome, for example, acceptability and effectiveness. In general, evaluation data were mostly positive, with a few studies reporting the shortcomings of the conversational agent or technical issues experienced by users. Seventeen studies presented self-reported data from participants in the form of surveys, questionnaires, etc. In 16 studies, the data were objectively assessed in the form of changes in BMI, number of user interactions, etc.

Similarly, it noted that high overall satisfaction was generally reported by the studies, but that the most common issues with conversational agents related to language understanding or poor dialogue management, which is consistent with our findings [2]. Some of this similarity in results is likely because of the overlap in included studies; 7 of their 17 included studies were also included in our review [2]. In contrast, smart conversational agents do not respond with preprepared answers but with adequate suggestions instead. This is enabled by machine learning, a type of artificial intelligence (AI), which allows for broadening of the computer system’s capacity through its learning from data (in this case conversations) without being explicitly programmed [2,6]. The process whereby the machine translates human commands into a form in which the computer can understand, process, and revert to the user is called natural language processing (NLP) [6] and natural language understanding or interpretation [6,7]. This degree of programming allows for personalized conversational agents to be generated.
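A toy example of that learning-from-data step is shown below: a tiny intent classifier built with scikit-learn, trained on a handful of hand-labelled utterances. The intents and training sentences are invented for the example, and real NLU pipelines are far larger and richer, but it illustrates how a system can generalize to phrasings it was never explicitly programmed to handle.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled example utterances (assumed intents for illustration only).
utterances = [
    "I need to book an appointment", "can I see a doctor tomorrow",
    "I ran out of my blood pressure pills", "please refill my prescription",
    "what are the side effects of metformin", "is drowsiness normal with this medication",
]
intents = ["schedule", "schedule", "refill", "refill", "med_question", "med_question"]

# TF-IDF features plus a linear classifier: a minimal statistical NLU component.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

# An unseen phrasing; with this toy data it will most likely map to "schedule".
print(model.predict(["could you get me in to see someone next week"]))
```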

All authors have read and agreed to the published version of the manuscript. Next, the titles and abstracts for each paper were exported from the reference manager into an Excel spreadsheet. This research is supported by the Ageing Research Institute for Society and Education (ARISE), Nanyang Technological University, Singapore.

For hospitals and healthcare centers, conversational AI helps track and subsequently optimize resource allocation. Fundamentally, scheduled appointments help reduce patient wait times and improve satisfaction. For instance, it can issue reminders for critical actions once patients have submitted the details of the post-care steps they have followed.

The latter was particularly important from a customer experience standpoint, given that there is understandably a lot of anxiety surrounding an impending test report, which makes a swift response all the more appreciated. Conversational AI, on the other hand, allows patients to schedule their healthcare appointments seamlessly, and even reschedule or cancel them. Besides this, conversational AI is more flexible than a conventional chatbot and will not come up with a blank response if symptom descriptions vary between users. On a daily basis, thousands of administrative tasks must be completed in medical centers, and they are not always done properly. Employees, for example, are frequently required to move between applications, look for endless forms, or track down several departments to complete their duties, resulting in wasted time and frustration. While the phrases chatbot, virtual assistant, and conversational AI are sometimes used interchangeably, they are not all made equal.

Additionally, as the included data indicated a self-reported impact in the studies of effectiveness, the study effectiveness is biased favorably toward the authors’ reporting of impact. Other features of the agents that users reported liking were the reminders and assistance in forming routines [37,48] and that the agents provided accountability [13,34,48], facilitated learning [13,34,37], and were easy to learn and use [8,15]. In the included studies, 3 of the conversational agents were virtual patients, and users in all 3 studies reported liking that it provided a platform for risk-free learning because they were not practicing on real patients [15,41,50]. This systematic review aims to assess the effectiveness and usability of conversational agents in health care and identify the elements that users like and dislike to inform future research and development of these agents. Many studies in this review showed some positive evidence for the usefulness and usability of AI CAs to support the management of different chronic diseases. The overall acceptance of CAs by users for the self-management of their chronic conditions is promising.

Two researchers familiarized themselves with the literature identified, generated the initial codes in relation to personality and content analysis, applied the codes to the included studies, compared their findings, and resolved any discrepancies via discussion. Conversational AI is powering many key use cases that impact both caregivers and patients. Patients should be informed about how their data is being used, the benefits of AI analysis, and given the option to opt out if they so choose. All quality assessments were conducted by 2 independent reviewers, with disagreements resolved by consensus. As there was a wide variety of study designs, the study types were classified by 1 reviewer and validated by a second reviewer, with disagreements being resolved by discussion with a third reviewer. As the broad inclusion criteria were intended to capture all relevant studies, a few of the included studies used implementation models for AI research that were beyond the scope of classic public health design methods.

Smart conversational agents have the potential to undertake more complex tasks that involve greater interaction, reasoning, prediction, and accuracy. Although the technology behind smart conversational agents is continuously developed, they currently do not have full human-level language abilities, resulting in misunderstanding and users’ dissatisfaction [8]. Furthermore, as machine learning algorithms develop, it is becoming increasingly challenging to keep track of their development, evolution, and the reasoning behind their responses.

Another challenge with Conversational AI in healthcare is the potential for errors or misdiagnosis. While AI chatbots can help to improve patient engagement and communication, they may not always provide accurate or appropriate medical advice in real time. There are also language barriers and cultural differences, which can limit the effectiveness of AI chatbots in certain contexts. In summary, the benefits of Conversational AI in healthcare are numerous and diverse, playing a key role in improving patient engagement and transforming healthcare delivery. By leveraging the power of AI-powered chatbots, healthcare providers can offer better patient care, improve healthcare outcomes, boost operational efficiency, and save costs in the long run. In some studies, a health care administrator or professional was available via the conversational agent for the user to communicate with.

In the next three to four years, as AI systems improve, the focus will inevitably shift toward making these virtual assistants more human at work. During the screening process, studies of conversational agents that were not capable of interacting with human users via unconstrained NLP were excluded. These included conversational agents that only allowed users to select from predefined options or agents with prerecorded responses that did not adapt to subsequent user responses. The basis for this exclusion is that, without the capability of using NLP, computational methods and technologies are rudimentary and do not advance the aims of AI for autonomous computational agents. As many studies did not explicitly state whether the investigated agent was capable of NLP, a description in the paper of the conversational agent allowing free-text or free-speech input was used as an indicator for NLP, and these studies were included.…