Proactive versus passive algorithmic ethics practices in healthcare
As AI algorithms for assisting medical diagnosis and treatment become increasingly prevalent in healthcare, practitioners face a critical choice in how to manage the ethical challenges that arise because algorithms are not ethically neutral [66]. They can adopt either proactive or passive algorithmic ethics practices, which differ in the level of effort and commitment devoted to issues such as fairness, accountability, and transparency. Proactive algorithmic ethics practices involve deliberate steps to anticipate and mitigate potential ethical risks before they cause harm. These may include establishing AI ethics committees, conducting regular algorithmic audits, providing clear explanations for AI-generated recommendations, and actively seeking patient feedback [9, 43]. For example, proactive healthcare providers might implement transparency measures such as algorithmic impact assessments, which seek to raise awareness of, and improve dialogue about, the potential harms of machine learning algorithms [57]. In contrast, passive algorithmic ethics practices are characterized by the absence of such dedicated efforts; they are reactive and focused on meeting minimum compliance requirements [43]. Providers adopting passive practices may deploy AI systems without fully assessing their impact on different patient populations or engaging in meaningful transparency [53]. The distinction between proactive and passive algorithmic ethics practices is particularly salient in healthcare, where algorithmic decisions can have significant impacts on patient outcomes [30].
Previous studies have empirically demonstrated the importance of ethical considerations in shaping user attitudes and behaviors toward algorithmic systems. For instance, Lepri et al. (2018) found that transparency and fairness in algorithmic decision-making significantly increased user trust and acceptance [36]. Similarly, Shin (2021) demonstrated that algorithmic transparency practices positively influenced users’ perceptions of AI systems’ ethicality and their subsequent willingness to adopt these technologies [59]. In the healthcare context specifically, Asan et al. (2020) showed that ethical aspects such as explainability and consistency in AI systems were key determinants of clinicians’ trust in and adoption of medical AI tools [3]. These findings underscore the importance of healthcare providers taking a proactive stance in creating an environment conducive to responsible AI use. However, there is limited evidence directly comparing the effects of proactive and passive AI ethics practices on patient responses in the healthcare context. While previous research suggests that proactive ethics practices may lead to more favorable outcomes (e.g., Morley et al., 2020) [43], empirical investigation is needed to validate this assertion and explore the underlying mechanisms. Our study therefore addresses this gap by examining how proactive and passive algorithmic ethics practices influence patients’ attitudes towards healthcare providers, their trust in these providers, and their intentions to use AI-enabled healthcare services. By shedding light on these relationships, we seek to provide actionable insights for healthcare organizations navigating the ethical challenges of AI adoption.
Effects of passive versus proactive algorithmic ethics practices
Previous literature on organizational ethics suggests that an organization’s ethical practices significantly influence stakeholder attitudes and behaviors [74]. When organizations proactively adopt positive ethical practices, they send positive signals to stakeholders, indicating that the organization values ethics and responsibility [68]. These practices shape a positive ethical image of the organization and enhance stakeholder favorability and engagement [35, 40, 60]. This positive ethical image, in turn, influences stakeholders’ perceptions of the organization’s characteristics, which form the basis for the identification process. Consumer-company (C-C) identification theory further enriches this perspective by positing that consumers’ identification with a company depends on the perceived congruence between their self-concept and the company’s characteristics [5, 27]. When consumers perceive a strong alignment between their own values and the organization’s ethical practices, they are more likely to feel a sense of connection and oneness with the organization [18]. That is, consumers begin to define themselves in terms of the organization’s characteristics, believing that their identity is closely tied to or overlaps with the identity of the organization. Integrating these two perspectives, we propose that organizations’ proactive adoption of ethical practices not only sends positive signals and shapes a positive ethical image but also enhances stakeholders’ identification with the organization by increasing the perceived congruence between their self-concept and the organization’s characteristics.
Consistent with this reasoning, on the one hand, providers that actively establish and implement algorithmic ethics guidelines, such as ensuring fairness, accountability, and transparency in AI decision-making processes, send a strong signal to patients that they prioritize ethical considerations in their use of AI technologies. On the other hand, these proactive practices contribute to building a positive ethical image of the healthcare provider, which resonates with patients who value responsible and trustworthy healthcare services. Patient attitudes, which encompass patients’ overall evaluations, feelings, and behavioral tendencies towards healthcare providers [1, 8], are likely to be more favorable when patients perceive a strong alignment between their own values and the provider’s ethical practices. In other words, patients are more likely to feel a strong sense of identification with these providers. This identification, in turn, manifests in more favorable patient attitudes, such as increased satisfaction with and loyalty towards the healthcare provider.
Furthermore, organizational ethical practices are also crucial for building stakeholder trust [52, 73], and this trust-building process can be enhanced by fostering strong consumer-company identification. Trust is a fundamental belief that patients hold in their healthcare providers, rooted in the confidence that these professionals will consistently act in the patients’ best interests, maintain the confidentiality of sensitive information, and provide accurate, reliable, and timely guidance [1, 3]. When healthcare providers proactively adopt and implement AI ethics practices, they demonstrate their commitment to ethical principles and values, which serves as tangible evidence of qualities such as honesty, respect, accountability, and integrity [61]. These proactive ethical practices align with patients’ values and expectations, enhancing their identification with the healthcare provider. This identification process strengthens trust-building mechanisms, as patients who strongly identify with a healthcare provider are more likely to perceive the provider as trustworthy and reliable [32, 51]. Such a proactive approach goes beyond mere compliance with regulations and showcases a genuine commitment to ethical behavior, thus fostering patient trust.
In addition to trust, organizational ethical practices shape stakeholders’ behavioral intentions [67]. Previous research has indicated that ethically oriented corporate social responsibility activities can increase consumer purchase intentions [4, 41, 62]. In the healthcare context, patients’ behavioral intentions include their willingness to engage with and adhere to the provider’s AI-based services and recommendations. When healthcare providers proactively address ethical concerns related to AI, such as ensuring transparency in AI decision-making and mitigating potential biases, patients are more likely to perceive the provider’s values and actions as aligned with their own, thus increasing their sense of identification [18]. This identification can enhance patients’ perceptions of the provider’s reliability and competence, as they feel a stronger connection and shared values with the provider. Consequently, patients are more likely to embrace AI-driven healthcare solutions, leading to greater acceptance of and willingness to use AI services.
In contrast, passive practices, where healthcare providers merely react to ethical issues or comply with minimum standards, may not evoke the same level of positive identification and attitudinal responses from patients. This lack of proactive engagement may hinder the development of consumer-company identification, as patients may not perceive a strong congruence between their own values and the provider’s practices [5, 18]. Moreover, passive practices may raise doubts about the provider’s genuine commitment to ethics and trustworthiness [68], as patients may question the provider’s integrity and benevolence, which are essential components of trust [48]. Consequently, when patients do not perceive a strong alignment between their own values and the provider’s ethical practices, they may be less likely to embrace and trust AI-based services and recommendations. This lack of identification and trust can lead to increased skepticism and reluctance to adopt AI-driven healthcare solutions, as patients may doubt the provider’s ability to address ethical concerns effectively. Based on the literature reviewed above, we therefore develop and test the following hypotheses (H1-H3), which are examined in our analyses and discussed in our results section:
H1: Proactive algorithmic ethics practices will lead to more positive patient attitudes towards the healthcare provider compared to passive practices.
H2: Proactive algorithmic ethics practices will lead to higher levels of patient trust in the healthcare provider compared to passive practices.
H3: Proactive algorithmic ethics practices will lead to stronger patient intentions to use AI-enabled healthcare services compared to passive practices.
The moderating role of healthcare engagement type
Healthcare engagement type refers to the nature of the healthcare service that patients choose to engage in, based on their prioritization of privacy protection versus health utility. This construct recognizes the active role patients play in their healthcare decisions and reflects the varying degrees of privacy concern and outcome expectation across different medical scenarios. Specifically, privacy-focused healthcare engagements are characterized by a higher priority placed on privacy protection by patients. These typically include services such as mental health consultations, sexual health services, or genetic testing [70], where patients are more concerned about the confidentiality and security of their sensitive personal information. In contrast, utility-focused healthcare engagements are those in which patients prioritize treatment effectiveness and health outcomes over privacy concerns. Examples include chronic disease management, physical rehabilitation, or emergency care [25, 39]. In these scenarios, patients are generally more willing to share personal information if doing so leads to more personalized care and improved health outcomes.
We argue that healthcare engagement type serves as a boundary condition for the effectiveness of healthcare providers’ algorithmic ethics practices in shaping patient responses. When patients engage in privacy-focused healthcare services, they tend to be highly sensitive to potential privacy risks and place greater value on robust data protection measures that safeguard the confidentiality and security of their personal information. In these situations, patients are more likely to scrutinize how their data are collected, used, and shared by healthcare providers, and they may be more hesitant to disclose sensitive information if they perceive any potential threat to their privacy. In this context, proactive algorithmic ethics practices can be particularly effective in sending strong positive signals about the healthcare provider’s commitment to data transparency, fairness, and privacy protection. Moreover, by implementing proactive ethics practices, healthcare providers can demonstrate that their data practices are aligned with patients’ values and expectations regarding privacy. This alignment is especially important for patients participating in privacy-sensitive healthcare services, who may be more attentive to these signals than those engaging in utility-focused services. This notion is supported by Ploug and Holm (2020), who emphasize the importance of respecting patient privacy preferences in enhancing trust in AI-based diagnostics, particularly for privacy-sensitive individuals [53]. Similarly, Esmaeilzadeh (2020) finds that privacy concerns significantly moderate the effect of perceived benefits on attitudes toward AI-based tools in healthcare [20], suggesting that patients with higher privacy concerns are more likely to appreciate and respond positively to proactive data protection measures.
In contrast, as patients’ healthcare engagement shifts towards being more utility-focused, such as in cases of chronic disease management or emergency care, the impact of proactive algorithmic ethics practices may be less pronounced. In these scenarios, patients are primarily driven by the potential health benefits and outcomes that AI technologies can offer, such as improved diagnostic accuracy, personalized treatment plans, and enhanced care efficiency. As a result, they may be more willing to share their personal health information and less sensitive to the specific data-handling practices of the healthcare provider, as long as they perceive that the AI system can deliver tangible improvements to their health and well-being. For these utility-focused healthcare engagements, the positive signals and perceived value alignment generated by proactive AI ethics practices may be less salient or impactful than for privacy-focused engagements. While patients are still likely to appreciate and value the healthcare provider’s commitment to ethical AI practices, their primary focus remains on the anticipated health benefits and outcomes. In other words, the perceived utility of the AI system in improving their health may outweigh or partially offset any concerns about data privacy or ethical risks, making the marginal impact of proactive algorithmic ethics practices less pronounced. This notion is supported by Xu et al. (2022), who find that the impact of personalization on individuals’ willingness to disclose personal information is moderated by their privacy calculus [72]. Specifically, the effect of personalization on disclosure intentions is weaker for individuals who perceive greater benefits relative to risks.
This suggests that when patients perceive significant health benefits from engaging with an AI system, they may be more willing to trade off some level of privacy for the expected utility gains, thereby reducing the relative importance of proactive algorithmic ethics practices in shaping their attitudes and behaviors. Based on the literature reviewed above, we therefore develop and test the following hypotheses (H4-H6), which are examined in our analyses and discussed in our results section:
H4: There will be an interaction effect between algorithmic ethics practices and healthcare engagement type on patient attitudes towards the healthcare provider, such that: (a) For privacy-focused healthcare engagements, proactive ethics practices will lead to significantly more positive attitudes compared to passive practices. (b) As the healthcare engagement type shifts towards being more utility-prioritized, the positive effect of proactive algorithmic ethics practices on attitudes will diminish, demonstrating a decreasing marginal effectiveness of ethical practices.
H5: There will be an interaction effect between algorithmic ethics practices and healthcare engagement type on patient trust in the healthcare provider, such that: (a) For privacy-focused healthcare engagements, proactive algorithmic ethics practices will lead to significantly higher levels of trust in healthcare providers compared to passive practices. (b) As the healthcare engagement type shifts towards being more utility-focused, the positive effect of proactive algorithmic ethics practices on patient trust will diminish, demonstrating a decreasing marginal effectiveness of ethical practices.
H6: There will be an interaction effect between algorithmic ethics practices and healthcare engagement type on patient intentions to use AI-enabled healthcare services, such that: (a) For privacy-focused healthcare engagements, proactive algorithmic ethics practices will lead to significantly stronger intentions to use AI-enabled services from healthcare providers compared to passive practices. (b) As the healthcare engagement type shifts towards being more utility-focused, the positive effect of proactive algorithmic ethics practices on patient intentions to use AI-enabled services will diminish, indicating decreasing marginal effectiveness of ethical practices.
