
Regulating professional ethics in a context of technological change

Abstract

Background

Technological change is impacting the work of health professionals, especially with recent developments in artificial intelligence. Research has raised many ethical considerations respecting clinical applications of artificial intelligence, and it has identified a role for professional regulation in helping to guide practitioners in the ethical use of technology; however, regulation in this area has been slow to develop. This study seeks to identify the challenges health professionals face in the context of technological change, and to assess whether regulators’ codes of ethics and guidance are sufficient to help them navigate these changes.

Methods

We conducted mixed methods research in Ontario, Canada, using qualitative content analysis of regulators’ codes of ethics and practice guidance (26 regulators, 63 documents analysed), interviews with 7 representatives from 5 health profession regulatory bodies, and focus groups with 17 healthcare practitioners across 5 professions in the province. We used thematic analysis to analyse the data and answer our core research questions.

Results

We find that codes of ethics focus more on general principles and on managing practitioners’ relationships with clients/patients; hence, it is not clear that these documents can successfully guide professional practice in a context of rapid technological change. Practitioners and regulatory body staff express ambivalence and uncertainty about regulators’ roles in regulating technology use. In some instances, health professionals experience conflict between the expectations of their regulator and their employer. These gaps and conflicts leave some professionals uncertain about how to practise ethically in a digital age.

Conclusions

There is a need for more guidance and regulation in this area, not only for practitioners, but with respect to the application of technology within the environments in which health professionals work.


Introduction

During the last few years, health professionals have experienced an intense period of technological change. For example, the COVID-19 pandemic prompted a proliferation of virtual care. Post-pandemic shortages of healthcare personnel are further encouraging new uses of technology to improve access to care and ease the administrative burden on practitioners [1–3]. The use of electronic record-keeping and digital scribes continues to grow. Developments in generative artificial intelligence (AI) are altering work practices and raising questions about ethical implementation in a context of limited regulatory guidance and uncertainty [1, 3–7].

Professionals, and the organizations that employ them, may be eager to adopt technological changes to enhance service delivery, but guidance surrounding ethical technology usage has been lagging [6, 8–10]. Professional regulatory bodies are charged with governing practitioner conduct, and there have been calls for them to develop policies in this area [1, 5]; however, with limited evidence on the ethical and practice implications of technological change, regulators may be uncertain about how to proceed [5]. The prevailing discourses surrounding AI are highly optimistic and see its application in healthcare as inevitable, while also highlighting risks – a challenging combination for policymakers [11]. Moreover, since technology is often developed by unregulated workers outside healthcare, the resulting products are not subject to oversight by health profession regulatory bodies. Thus, health profession regulators may be uncertain about their role in regulating technology use in the public interest, or even about what the public interest is in this context.

The emerging literature on AI has highlighted many ethical considerations. For example, the question of who is responsible for AI outputs, especially in cases where incorrect advice has been given, is still being debated (including in the courts) [12–14]. Is the technology at fault? Those who programmed it? The practitioners who used the technology? The organizations mandating its use? Concerns also surround confidentiality and privacy in the use of AI and other new technologies [12, 13]. Patients and practitioners have expressed concerns over bias and inaccuracies [15] that could lead to adverse health outcomes and exacerbate disparities and inequalities in treatment [10]. There are concerns that AI will lead to less patient-centred care, and result in less empathy, trust, and ethical care [10, 14–16]. Furthermore, it is not clear whether health practitioners currently have the competencies required to use AI effectively in practice. The lack of transparency surrounding AI algorithms, and the limited research on the impacts of applying AI in healthcare settings, exacerbate existing concerns [7, 17].

Although the growing literature on AI applications in healthcare has raised these concerns and has begun to examine stakeholders’ experiences [7, 10, 15], research has rarely considered these changes alongside others: the adoption of other technologies, shifts in ways of working (e.g., the expansion of virtual practice), workforce shortages, rationalization, funding shortfalls, and other workplace challenges that may alter health practitioners’ ways of working and create ethical dilemmas for them [18]. Health profession regulators’ codes of ethics, guidance, and standards may help to guide practitioners through this changing practice landscape; however, it is not clear whether existing standards are sufficient for this purpose. Since the COVID-19 pandemic, new standards respecting virtual practice have been developed in many healthcare professions, and guidance on AI is emerging, but it is still in the early stages of implementation. Whether these adequately address the myriad concerns at stake (e.g., quality of practice, privacy, equity and access to services, practitioners’ technological competence, and other ethical considerations) is not clear [19].

To assess the extent to which regulatory codes of ethics and practice guidance are available to inform Ontario health professionals’ use of technology in a context of workplace change, we conducted a mixed methods study. Our goal was to learn what challenges healthcare practitioners were facing amid technological and workplace change, whether existing codes of ethics and practice guidance were sufficient to help them navigate these changes, and whether they faced ethical dilemmas that regulator guidance did not address. We sought to hear from practitioners and regulators to understand whether technological changes generate ethical challenges, and how these might be resolved through professional regulation. Through this small-scale study we contribute to debates on the ethical dilemmas experienced by healthcare workers in an era of technological change, and the challenges of regulating health professionals in this context.

Methods

Research context

Health professions are regulated at the provincial level in Canada. In Ontario, one act – the Regulated Health Professions Act – governs regulation of practitioners in the sector [20]. Under the legislation, 26 regulatory bodies (called colleges) have been established to regulate over 30 distinct professions and occupations. The mode of regulation is modified self-regulation subject to government oversight. Regulatory colleges include all practitioners registered to practise in the jurisdiction, and they are governed by a council composed of both practitioners (typically elected by their peers) and public citizens (usually appointed by the government). These bodies are accountable to the state, with oversight from the Ministry of Health and other bodies like the Office of the Fairness Commissioner, which scrutinizes registration practices. All regulatory bodies are expected to issue ethical codes, guidance, and standards to guide and govern professional practice.

Research design

Our research was conducted in 2023 and the first half of 2024; it had three phases. First, we conducted a content analysis of regulated Ontario health professions’ codes of ethics and related guidance / standards (N = 63). These documents were purposively selected from publicly available documents on regulators’ websites. Codes vary in length from a few pages to over 25. Some regulators have integrated ethical codes into their standards and by-laws (see Appendix for the list of documents analysed). The content analysis was led by BM, a PhD student in sociology, supported by senior scholar TLA, mid-career researcher KL, and early-career researcher SM.

Based on content analysis findings, in our second phase, we chose six regulatory bodies for follow-up, with the goal of interviewing regulators that had approached the writing of codes of ethics differently – in terms of format, length, and attention to technological and workplace concerns – and to include regulators of professions with distinct working environments (e.g., hospitals and private practice; employees and the self-employed). Leaders at these regulatory bodies were emailed directly (using publicly available email addresses), provided information about the study, and invited to participate. Five of the six colleges agreed to an online interview via Zoom; in one instance, three members of the college staff team attended the interview. In total, seven regulatory workers were interviewed. All participants were in leadership positions. Interviews were led by the lead author (TLA), a senior and experienced qualitative researcher, supported by one or more members of the research team who asked additional and follow-up questions and took notes. The interview guide was created specifically for this study (see supplementary file). We do not identify participating regulators to ensure participant confidentiality.

In the third phase, we held five online focus groups via Zoom with a total of 17 health professionals, to explore the impact of technological and workplace change on professional practice, and whether existing codes of ethics provided adequate guidance. Any registered health professional actively practising in the province was eligible to participate, although effort was made to recruit practitioners regulated by one of the five regulatory bodies involved in the earlier research phase. Practitioners were recruited through LinkedIn, regulatory bodies, and professional associations. The 17 participants we talked with were from five different professions, including three regulated by colleges represented in the Phase 2 interviews. The focus group guide was created specifically for this study (see supplementary file). All participants were promised confidentiality, so their specific professions are not identified. We did not collect demographic information about the participants, although it was clear that they had a range of experience, with some having practised for decades while others had been in practice for only a few years. The focus groups were led by senior researcher and sociologist TLA, supported by PhD student BM. Team members KL and SM attended most focus groups, asked additional questions, and took notes.

Interviews lasted approximately 60 min, while focus groups lasted between 60 and 90 min. All were audio recorded with participants’ consent, and transcribed verbatim. Only participants and team members were present during the interviews and focus groups. In the interviews and focus groups, participants were told that we were interested in technological change broadly defined, and we asked them to identify technological changes impacting professional practice in their fields. We also asked about virtual practice, social media, and AI. We do not consider these interviews and focus groups exhaustive and acknowledge that the findings are not generalizable. At the same time, the interviews and focus groups yielded considerable insights from practitioners and regulators respecting the challenges of regulating professional practice in a context of technological change.

Data analysis

The first phase of our research involved content analysis of the codes of ethics, guidance, and related publicly available documents retrieved from regulators’ websites and Ontario regulations. Analysis was both inductive and deductive. Inductively, we read through the documents multiple times to identify the values, principles, and areas of practice singled out for attention within the documents. Deductively, we analysed the documents for mentions of technology (both generally and with respect to specific applications of technology, including virtual care, social media, and AI), as well as mentions of work context and work relationships. This information was coded and analysed in a spreadsheet. Attention was paid to which technologies were mentioned in codes of ethics and guidance, and whether guidance was provided about their use.
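To illustrate the deductive step, the sketch below shows how a keyword scan of this kind could be automated in Python. It is purely illustrative: the folder name codes_of_ethics/, the topic labels, and the keyword lists are hypothetical stand-ins, and it assumes plain-text copies of the documents; the actual coding for this study was done manually in a spreadsheet, as noted above.

import csv
import re
from pathlib import Path

# Deductive codes: technology topics searched for in each document.
# These keyword lists are illustrative, not the study's actual codebook.
TOPICS = {
    "virtual_care": [r"virtual care", r"telepractice", r"telehealth"],
    "social_media": [r"social media"],
    "ai": [r"artificial intelligence", r"\bAI\b"],
    "tech_competence": [r"technological competenc\w*"],
}

def code_document(text: str) -> dict[str, bool]:
    """Flag which technology topics a document mentions at least once."""
    return {
        topic: any(re.search(p, text, re.IGNORECASE) for p in patterns)
        for topic, patterns in TOPICS.items()
    }

def main() -> None:
    # Assumes plain-text copies of the documents in codes_of_ethics/
    rows = [
        {"document": path.name, **code_document(path.read_text(errors="ignore"))}
        for path in sorted(Path("codes_of_ethics").glob("*.txt"))
    ]
    # One row per document, one true/false column per topic: the kind of
    # matrix that can be tallied per regulator (e.g., the 20/26 counts
    # reported in Table 1).
    with open("topic_matrix.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["document", *TOPICS])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    main()

A document-by-topic matrix of this kind makes prevalence counts straightforward to compute and audit, whether it is produced manually or programmatically.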

We analysed the interview and focus group data iteratively to identify key themes related to ethical challenges and technology use. We reviewed the transcripts and discussed them as a team to identify codes that captured these themes. Subsequent rounds of coding were informed by our research questions and goals: what types of technology were identified by participants as being impactful, and in what ways? What ethical challenges did practitioners face, and were any of these challenges related to technological changes? Extracts from the different focus groups and interviews were brought together for each theme as it was developed, and findings were refined with the research team throughout the analysis. Quotes and descriptions are used in our findings to support each theme. For this paper, we focus primarily on the ethical concerns practitioners reported in relation to technology and technological change, and how or whether regulatory interventions might help alleviate their concerns.

Data analysis was enhanced by the multidisciplinary backgrounds of research team members, who had expertise in professional regulation (TLA, KL, SM), political science (SM), health policy (SM), nursing (KL), law (KL), and sociology (TLA, BM). Although a co-investigator (KL) was a member of a regulated health profession, neither she nor other members of the research team had a relationship with any participant.

Ethical considerations and funding

Approval from the Western University Non-Medical Research Ethics Board (file #123730) was obtained prior to beginning the study. This study was funded by a Regulatory Research Award from the College of Optometrists of Ontario. The funder did not play a role in the development of the study or this manuscript.

Results

Phase 1: Health profession codes of ethics

Generally, Ontario health profession codes of ethics emphasize responsibilities to clients/patients and the need to maintain competence and conform to standards of practice. All colleges had guidance or ethical codes touching on these issues, as Table 1 shows. Also very common, but not ubiquitous, were guidelines/codes respecting relationships with colleagues and coworkers (22/26). Most codes of ethics also emphasized general principles, including beneficence, respect, trustworthiness, accountability, and fairness. Despite the emphasis within codes of ethics on relationships with clients and coworkers, very few codes addressed relationships between professionals and their employers (or their staff) (7/26). Guidelines for employed professionals were quite variable, with some reminding practitioners that they should abide by their employers’ policies, while a few advised registrants that they have a responsibility to encourage their employers to establish rules consistent with those of the regulatory colleges.

The presence of guidance or codes of ethics addressing technology use was variable, and some dimensions have received more attention than others. As of mid-2024, most health professional regulators had policies or ethical codes respecting social media use (20/26) and virtual practice (17/26); however, only about a fifth of regulators (5/26) had standards/codes exhorting professionals to be competent in their use of technology, and to use it to support their knowledge and skills – not as a replacement (see Table 1). Statements respecting AI use were also rare (6/26), but increasing in prevalence; most policies that explicitly mentioned AI were published in 2024.

It is important to note that ethical codes and guidelines differ significantly in comprehensiveness, with some merely mentioning an issue briefly while others provide detailed guidelines, instructions, and case scenarios for their registrants.

Table 1 Characteristics of codes of ethics / guidance / standards of Ontario’s 26 health profession regulatory colleges

Phase 2: Regulator interviews

In our interviews with regulatory leaders and staff, participants frequently mentioned the challenges of keeping up with technological changes. One regulator captured the sentiments of some others, with respect to the pace of change:

Things are changing – moving very fast. You know, we might think that we see things on the horizon, and then all of a sudden, they’re before us, right? Like … AI, we might think that’s something that our next generation is going to have to worry about, and now it’s like, oh no, we’re actually going to have to come up with something. This is actually happening. So, I think that the timelines are shorter…. We can’t rely on longer processes, we need to make speedy decisions to be relevant and to make sure that the proper guidance is out there. (R3)

This participant’s phrasing, “come up with something”, is indicative of the uncertainty regulators felt about what exactly new policies should look like. It can take time to assess risks, gather evidence, and develop potential policies. Regulators were also uncertain, in some cases, about whether or when they should establish standards, guidance, and/or codes of ethics about technology use. In several cases, we were told that the first step was to send out a communication advising careful use of new technology or awareness of potential privacy considerations.

As demonstrated by our content analysis findings, all health profession regulators in Ontario have codes and standards emphasizing public protection and competency. As regulated health professionals, practitioners are responsible for maintaining their competence and protecting their patients and the public from harm. These responsibilities remain in the face of technological developments. One might wonder, then, if guidance or codes specific to technology use are necessary.

Most regulators have produced ethical codes and guidance respecting social media use and virtual practice; however, the development of these policies has been, for the most part, reactive. Problems emerged and regulators developed standards or guidance in response. For example, one regulator we spoke to explained that their college had developed guidance around social media use in response to problems, but as issues persisted, they were thinking of moving policy to the level of practice standards. Their policies were informed by those of other colleges. It was harder to be proactive and regulate before problems occurred. For example, one participant explained that they had begun to explore international practices in AI regulation, but at the time of our interview were not yet ready to formulate a policy; more “deep thinking” (R3) was required.

Practitioner questions and concerns also shaped regulators’ policies. Professionals grappling with ethical dilemmas, or who were uncertain about their responsibilities with respect to technology use, might call their regulatory college for advice (or to highlight a colleague’s negative behaviour). Many regulators had staff members working as practice advisors, whose job was to answer practitioner questions and support them in working out the answers:

A lot of [practitioners] would call us for practice advice with ethical issues. And our role as a practice advisor was not to tell them what to do, but just kind of give them the tools necessary to help them make their own decision. (R1)

As we have seen, regulators do not always have clearly defined policies respecting technology use, and many codes of ethics identify general principles rather than concrete advice (which is more common in standards or guidance).

Regulators were aware that their mandate prioritized public protection and the public interest. Providing support to members appeared to be a ‘member service’ and hence, some asserted, something that professional associations should provide – not regulatory bodies. In Ontario, health profession regulatory bodies and associations are separate organizations with different mandates; the former regulate in the public interest, while the latter represent professionals and their interests. Providing guidance about ethical behaviour and good practice is an area of potential overlap between these organizations. From a regulator’s point of view, answering practitioners’ questions about ethical conduct could be said to serve the public interest by preventing harm; however, too much support might suggest that the regulator was focused on supporting practitioners, which could be perceived as a violation of their role.

We have advised [practitioners] about the importance of cybersecurity, because there’s a need to protect patient health information…. So we have done that. Now, we’re also cognizant of our role as a regulator as opposed to being a member services organization like the association. We believe that’s their role … (R2)

Since professional regulators handle complaints, investigations, and discipline against practitioners, our interviewees further explained that practitioners might reasonably be reluctant to contact them for support.

[T]he college is not a place that health professionals really want to expose their vulnerabilities to out of fear of, you know, certain consequences. So I would say that the association would probably have a much better sense of those complex issues that they’re bringing to them to help them figure out. (R3)

Overall, whether because of their uncertainty about the ethical implications of technology use, or their reluctance to provide too much support to practitioners facing dilemmas, regulators were not quick to implement policy in this area.

A further complication is the fact that many health professionals are employed by others, and employers might also make demands regarding technology use – demands that could run counter to the college’s own requirements. Examples included practices in electronic record-keeping (R1, R5), doing work for which practitioners lacked competency (R4), or “cutting corners … if an employer doesn’t feel that something needs to be done even though the college requires it” (R1). Regulators were aware of such conflicts; however, they viewed them as outside their purview.

And we just have to keep saying, ‘Look, we don’t regulate the practice owner. We don’t regulate businesses or organizations. We regulate you [the practitioner], which means that you’re on the hook to make things right.’ (R5)

Consistent with the advice provided in Table 1, practitioners could be encouraged to convince their employers to change their practices.

Phase 3: Focus groups with practitioners

The healthcare practitioners who spoke with us had much to say about technological change, ethical challenges, and regulator guidance. Most practitioners were excited about recent and future technological developments that held promise to enhance practice. Several participants told us they had consulted regulators’ guidance and codes of ethics surrounding technology use. A few had called their college for advice about ethical dilemmas related to technology use. Our participants were divided about the usefulness of college guidance and codes of ethics. Some appreciated the emphasis on general principles in codes of ethics.

I do appreciate how loose some of the guidelines may be. And I think it’s really an invitation for one’s clinical judgment. Because the cost of being too rigid, and strict with guidelines is that I don’t think the college has the capacity to capture everything, all types of different scenarios. (FG5)

Applying these principles to concrete situations could be difficult, however:

You know, you can look at a code of ethics for like…guiding principles, right? And you can look at those, but if nobody’s taught you how to think critically… (FG1)

One participant warned against overly prescriptive guidelines: “you also don’t want the regulators to have such a heavy hand that they stifle the innovation and the development of AI technology” (FG3). At the same time, this participant agreed with others that “there needs to be more guidance and direction from regulators”:

I think that the standards have been beefed up since probably 5–7 years ago, because of all of the shifts recently. Do I think that they’re sufficient? I do not. (FG5)

When those sorts of situations come up, and I look at the standards, it’s not sufficient … to support good practice, because those are the moments where you’re relying on the standards for some support, and it just kind of falls short. (FG5)

Participants identified a few factors contributing to the challenges in this area. First, as new technologies develop it is difficult for regulators to determine what risks might accompany their implementation. As we have seen, regulation is often reactive:

It’s always one or two steps behind playing catch up … Meanwhile, the profession’s already embraced AI or already embraced, you know, digital records, and it’s like, oh, now that’s a problem. Now, we have to set the rules. (FG1)

It’s like they’re building the plane as they’re flying it. (FG5)

As these participants indicate, in the midst of technological change, formulating policy can be a challenge. In response, regulators might issue guidance instead, but this could be so vague as to be unhelpful:

There was one communiqué [about AI], and I say one, and it said, “Use with caution.” (FG5)

This participant was left with many questions about incorporating AI into their own practice.

Second, some practitioners found regulatory guidance to be of limited utility when faced with employer demands that ran counter to their codes of ethics. Some private-sector employers threatened dismissal if professional employees did not follow their rules.

I’m trying to hold up my ethics, with what I know is right, as a …professional, but to have somebody that’s essentially the [office] accountant looking down at me and saying, we’re not going to be able to pay you this month, if you don’t do what we say. How am I supposed to upkeep what I know is right? (FG2)

They only regulate the registrants, so they don’t really have jurisdiction in terms of what a private company may do… (FG3)

While regulators encouraged practitioners to inform employers about their ethical responsibilities and encourage them to develop policies consistent with them (as noted above), participants did not see this as a realistic option:

You can look to the code of ethics and go, ‘Okay, wait a minute. I am right’ … But now, what are you going to do? Are you going to take that to your boss … like, ‘here’s my code of ethics. This is what I’m supposed to be doing’ …. They’re going to look at you like you’ve got two heads. (FG2)

Participants in these situations questioned whether regulator policies and codes of ethics respecting technology were even relevant:

I just don’t think changing the code of ethics is going to move mountains. Like I think there will be no – it won’t move the needle at all. (FG2)

Finally, there was a sense among some of our participants that regulator policies were out of touch with the complexity of the technologies and their use. For instance, regulators’ social media policies typically address the blurring of professional-patient relationships on social media, and encourage professionals to separate their professional and personal social media accounts. However, these policies do not always account for the complex ways in which social media is being used:

[P]rofessionals, you know, might have like a professional Instagram or a professional TikTok, where they do… you know, these videos or reels to educate patients. But a lot of times patients also try and reach out in direct messages. And they might be asking, you know, very in-depth questions that might be even personal or borderline leading to like an assessment. So I think there needs to be some better guidelines on the use of social media. (FG3)

Use of AI in practice was another area of concern:

This technology is so new and so untested, that it becomes one that we have to be extremely careful with it. Because I mean, anyone that’s used ChatGPT, or any one of these things, it’s a fascinating kind of technology…. But at the end of the day, on the other end, there’s a patient there that we can’t really risk having them be the ones that get messed up. So I think there are very easy regulations that can be put in, such as making sure that the [professional] is the one making the final call. But I think as AI becomes better and better, this is going to become a much more complicated issue in terms of what regulations need to be put in. What guidelines need to be put in. (FG3)

Many questions remained for most of our study participants about how technological change could be addressed effectively through regulation.

Discussion

Calls for enhanced regulation around AI use in health professional practice are growing, with some commentators calling for co-ordinated efforts implemented quickly, before AI use becomes more widespread [1, 14]. However, the issues surrounding AI’s implementation are complex, and co-ordination, proactivity, and fast action in this domain are challenging when risks and responsibilities are difficult to identify. Moreover, the technological changes facing health professionals in Ontario extend well beyond AI; they include virtual care, social media use, questions of technological competency, and other developments. The confluence of these changes and concerns – which impact not only professional workers, but also the work of regulators [29, 30] – has created challenges for regulators and practitioners alike.

Our mixed methods research study examined whether codes of ethics provide adequate guidance for regulated health professionals navigating technological change in the province of Ontario, Canada. The content analysis of health profession codes of ethics revealed that guidelines respecting social media use and virtual practice were common, but few colleges had guidance respecting technological competency or AI in practice (though guidance on the latter was increasing). Regulators’ codes of ethics typically emphasized general principles, relationships with patients/clients, and competency. Regulators expressed uncertainty about regulating professionals’ use of technology, and practitioners questioned how useful regulators’ codes and guidance were, since these tended to be general and removed from practice contexts. Regulators held that they were not well placed to support practitioners grappling with technological change, since their primary mission is public protection. Moreover, regulators have no control over practitioners’ practice environments, and hence their ability to influence how technology is used in practice is limited. In some instances, practitioners experience conflict between the expectations of their regulator and the expectations of their employers. The result is a gap in guidance and a lack of clarity around ethical practice, leaving practitioners uncertain about how to practise ethically in a digital age.

Regulators (and practitioners) identified several challenges they grappled with when considering the ethical implications of technological change. First, professional regulation is fundamentally reactive: regulators respond when risks are identified and problems arise. Because it is difficult to anticipate risks and ethical challenges associated with technology use, regulators end up creating policy ‘on the fly’. Second, there was some ambiguity around regulators’ responsibilities: while helping practitioners navigate ethical challenges associated with technology use and accompanying workplace changes might mitigate future harm to clients, several of our interviewees felt that actively advising practitioners was outside their mandate.

Third, professional regulation in Ontario focuses on ensuring public safety through overseeing practitioner competence and conduct. Professional regulation dates from the nineteenth century, when most regulated professionals were self-employed. Provincial professional regulators in the health sphere have little to no governance authority over professionals’ workplaces. At least some professional employees told us that their employers made it difficult to meet college expectations for good, ethical practice [29]. Fourth, there is so much optimism surrounding AI (and other technologies) that at least some regulators and practitioners appear concerned about implementing policies that might deter innovation or progress, or that might be too prescriptive. However, a lack of regulation in this area risks exacerbating ethical challenges for professional practitioners who are uncertain what their ethical obligations are in the context of new technologies. Finally, regulatory bodies emphasize practitioner competence, but few have seriously considered whether they need to establish regulations around competence in technology usage. Some regulators explicitly advise practitioners about their responsibility to be competent in the use of technologies in practice; however, in related research we have found that other regulators assert that ‘competence is competence’ and that there is no need for specific guidelines around technological competency [30].

Despite these challenges and areas of uncertainty, there is a clear need for enhanced regulation in this area to promote ethical use of technologies in professional practice. As others have argued, policy development in this area will require more discussion and collaboration between regulators, practitioners, and other stakeholders, including government, technology developers, and employers [1, 10]. Ethical guidance may need to account for the fact that practitioners’ technology use is not always at their discretion. Employers and employing organizations may also require regulation to ensure ethical use of AI and other technologies. The application of technology cannot be left to technology companies and employers, since they are not typically held to the same ethical commitments and public interest obligations as regulated health professionals.

Study limitations and future research

Our study was small in scope. While the content analysis reviewed all available documents from all health profession regulators in the province, interviews were conducted with only seven staff members at five regulatory bodies, and focus groups were held with only seventeen practitioners across five professions. As such, our findings are not generalizable to all regulators and health professionals in the province of Ontario, let alone outside this jurisdiction. Nonetheless, this exploratory research does illuminate challenges with regulating ethical practice in a context of technological change that appear to be impacting people in other jurisdictions as well [1, 15]. These challenges are worth exploring in more detail in future research with larger samples.

Furthermore, our sample of regulators was purposive: we sought to interview a cross-section of regulators who had different policies respecting the use of technology, and who regulated workers in a variety of practice settings. Unfortunately, we were unable to recruit practitioners from all the professions regulated by the regulatory bodies represented in the interview phase of the project. In addition, we only had the capacity to hold a limited number of focus groups, and hence successfully recruited and included only 17 health practitioners. There is no reason to believe that our participants in either the focus group or interview phases were representative of their professions, or of other health professionals. It is likely that there was a selection effect among focus group participants, whereby those who had more experience with, or interest in, ethical dilemmas volunteered. This was helpful to us, though, giving us insight into the concerns of practitioners who had given some thought to the issues we were exploring.

Future research should involve a wider array of practitioners and regulators, across regions, to explore the ethical impact of technological change in more depth. Furthermore, this area is shifting, with technological changes occurring rapidly and regulatory guidance expanding accordingly. A follow-up study to ascertain whether new guidelines respecting AI are helpful in regulating this area is warranted. Previous research on technological change (and especially AI implementation) has not highlighted the role of organizational and employment contexts; this is an area requiring further exploration. The contexts in which technology is introduced, and how it is applied, could shape the ethical challenges associated with its use.

Conclusion

Increasingly, researchers have explored the risks and ethical considerations associated with the introduction of AI into healthcare. Some of these risks can be mitigated by regulation of health professional practice; however, regulators have been slow to implement ethical guidance concerning the use of AI. Our exploratory research has found that in Ontario, Canada, there is limited regulatory guidance, and few codes of ethics, related to the use of technology in practice (with the exception of recent guidance on virtual care and social media). Regulators find it difficult to identify risks and are, at times, uncertain about how to proceed. Practitioners find that current guidance is frequently insufficient to support them as they navigate ethical dilemmas stemming from the changing nature of practice. This research has highlighted both the lack of regulation promoting ethical practice in a digital age and the need for more of it. It has also identified some limitations, with ethical implications, of current modes of professional regulation, and therefore supports arguments that regulating technologies like AI requires a multi-pronged approach extending beyond the practitioners who use them to those who develop and apply them. Such an approach appears necessary if we are to mitigate the many risks associated with AI and other technologies and promote ethical healthcare practice that serves the public interest.

Data availability

Raw interview data cannot be made publicly available for ethical reasons because they may contain identifiable or sensitive information. Excerpts of redacted interview and focus group transcripts may be made available from the corresponding author on reasonable request. Documents analysed are publicly available.

References

  1. Smith H, Downer J, Ives J. Clinicians and AI use: where is the professional guidance? J Med Ethics. 2024;50:437–41.

  2. Agarwal R, Dugas M, Gao G. Augmenting physicians with artificial intelligence to transform healthcare: challenges and opportunities. J Econ Manage Strat. 2023;33:360–74. https://doi.org/10.1111/jems.12555.

  3. Meskó B, Hetényi G, Györffy Z. Will artificial intelligence solve the human resource crisis in healthcare? BMC Health Serv Res. 2018;18:545. https://doi.org/10.1186/s12913-018-3359-4.

  4. Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689. https://doi.org/10.1186/s12909-023-04698-z.

  5. Professional Standards Authority. Response to the Government consultation on the White Paper: a pro-innovation approach to AI regulation. 2023. https://www.professionalstandards.org.uk/docs/default-source/publications/consultation-response/others-consultations/2023/psa-response-to-government-consultation-on-white-paper-on-artificial-intelligence-regulation.pdf?sfvrsn=e4a14a20_3.

  6. Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med. 2023;6(1):120–6.

  7. Arbelaez Ossa L, Lorenzini G, Milford SR, et al. Integrating ethics in AI development: a qualitative study. BMC Med Ethics. 2024;25:10. https://doi.org/10.1186/s12910-023-01000-0.

  8. Conseil interprofessionnel du Québec (CIQ). Présentation sommaire de l’encadrement actuel de l’intelligence artificielle – document de réflexion [Summary overview of the current governance of artificial intelligence – discussion paper]. 2021. https://conseil-interprofessionnel-du-quebec.s1.yapla.com/fr/lire-une-actualite/presentation-sommaire-de-lencadrement-actuel-de-lintelligence-artificielle---document-de-reflexion.

  9. Haupt CE. Governing A.I.’s professional advice. McGill Law Journal. 2019;64(4). https://lawjournal.mcgill.ca/article/governing-a-i-s-professional-advice/. Accessed 30 Sept 2024.

  10. Hasan HE, Jaber D, Khabour OF, et al. Ethical considerations and concerns in the implementation of AI in pharmacy practice: a cross-sectional study. BMC Med Ethics. 2024;25:55. https://doi.org/10.1186/s12910-024-01062-8.

  11. Arbelaez Ossa L, Milford SR, Rost M, Leist AK, Shaw DM, Elger BS. AI through ethical lenses: a discourse analysis of guidelines for AI in healthcare. Sci Eng Ethics. 2024;30:24. https://doi.org/10.1007/s11948-024-00486-0.

  12. Proctor J. Air Canada found liable for chatbot’s bad advice on plane tickets. CBC News. Feb 15, 2024. https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416.

  13. McKendrick J, Thurai A. AI isn’t ready to make unsupervised decisions. Harvard Business Review. Sept 15, 2022. https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions.

  14. Cestonaro C, Delicati A, Marcante B, Caenazzo L, Tozzo P. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review. Front Med (Lausanne). 2023;10:1305756. https://doi.org/10.3389/fmed.2023.1305756.

  15. Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics. 2024;25:74. https://doi.org/10.1186/s12910-024-01066-4.

  16. Sauerbrei A, Kerasidou A, Lucivero F, et al. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inf Decis Mak. 2023;23:73. https://doi.org/10.1186/s12911-023-02162-y.

  17. Aldhafeeri FM. Navigating the ethical landscape of artificial intelligence in radiography: a cross-sectional study of radiographers’ perspectives. BMC Med Ethics. 2024;25:52. https://doi.org/10.1186/s12910-024-01052-w.

  18. Adams TL, Sawchuk PH. Ethical dilemmas and workplace change: nurses and engineers. In: Livingstone DW, editor. Professional power and skill use in the knowledge economy. Leiden: Brill; 2021.

  19. Adams TL, Leslie K. Regulating for-profit virtual care in Canada: implications for medical profession regulators and policy-makers. Healthc Manage Forum. 2023;36(2):113–8. https://doi.org/10.1177/08404704221134872.

  20. Regulated Health Professions Act, 1991, S.O. 1991, c. 18.

  21. Ontario College of Pharmacists. Code of Ethics. https://www.ocpinfo.com/wp-content/uploads/documents/CodeofEthics_final.pdf.

  22. College of Physicians and Surgeons of Ontario. The Practice Guide. https://www.cpso.on.ca/Physicians/Policies-Guidance/Practice-Guide (p. 9).

  23. College of Physiotherapists of Ontario. Code of Ethics. https://www.collegept.org/rules-and-resources/ethics (p. 1).

  24. College of Optometrists of Ontario. Ontario Regulation 859/93 under the Optometry Act (Professional Misconduct). https://www.ontario.ca/laws/regulation/930859.

  25. Royal College of Dental Surgeons of Ontario. Practice Advisory: Professional Use of Social Media. https://az184419.vo.msecnd.net/rcdso/pdf/practice-advisories/RCDSO_Practice%20Advisory_Professional_Use_of_Social_Media_.pdf.

  26. College of Physicians and Surgeons of Ontario. Advice to the Profession on Virtual Care. https://www.cpso.on.ca/en/Physicians/Policies-Guidance/Policies/Virtual-Care/Advice-to-the-Profession-Virtual-Care.

  27. College of Psychologists and Behaviour Analysts of Ontario. Standards of Professional Conduct. https://cpbao.ca/members/professional-practice/standards-of-professional-conduct/ (p. 28).

  28. College of Dietitians of Ontario. Code of Ethics. https://www.collegeofdietitians.org/employers/standards-and-ethics.aspx (p. 3).

  29. Leslie K, Myles S, Alraja AA, et al. Professional regulation in the digital era: a qualitative case study of three professions in Ontario, Canada. PLoS ONE. 2024;19(5):e0303192. https://doi.org/10.1371/journal.pone.0303192.

  30. Public Interest Regulation in a Digital Age: Research Study Findings. Report for the Canadian Network of Agencies for Regulation (CNAR). 2024.


Acknowledgements

The authors wish to thank the participants for their valuable contributions to the study, and the College of Optometrists of Ontario for their funding support.

Clinical trial number

Not Applicable.

Funding

This project was funded by a Regulatory Research Award from the College of Optometrists of Ontario. The funder did not play a role in the development of the study or this manuscript.

Author information


Contributions

TLA conceptualized the study and led the funding acquisition and original manuscript preparation. BM led the content analysis. All authors contributed to data collection and analysis. All authors reviewed and revised the final manuscript.

Corresponding author

Correspondence to Tracey L. Adams.

Ethics declarations

Ethics approval and consent to participate

Approval from the Western University Non-Medical Research Ethics Board (file #123730) was obtained prior to beginning the study. All focus group and interview participants provided informed consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Adams, T.L., Leslie, K., Myles, S. et al. Regulating professional ethics in a context of technological change. BMC Med Ethics 25, 143 (2024). https://doi.org/10.1186/s12910-024-01140-x

