
Moral Responsibility in Tech

Do Companies Have a Moral Obligation to Ensure AI Is Used for the Betterment of Society?


The companies we have come to know and love don't just represent the items in our homes or the brand of coffee in our cups. They represent something more, because a company's values are imbued in the products it sells. That Nike shirt you own represents “bringing inspiration and innovation to every athlete in the world.” That coffee you had this morning at Starbucks may give you a feeling of “warmth and belonging.” In a world of ever-expanding technological power, the values imbued in technology carry particular weight, and with that weight comes responsibility. Companies have a responsibility to act as moral guides when developing and implementing technology, both because of the lack of external governance and because of the closed nature of the technology design process. For better or worse, the decision is theirs to make. They therefore have a responsibility to make the right decision for the betterment of society.

Without external oversight to mitigate negative social consequences, the responsibility for ethical technological development should be taken up by companies as internal governance. Large firms such as Microsoft and Google, for example, have put considerable effort into developing standards for the responsible use of artificial intelligence. Wrestling with decisions about building technology for ICE and the Department of Defense, workers at these companies pushed to halt cooperation with immigration agencies, law enforcement, and militaries. Internal governance was implemented to ensure that the technologies they build are used for good, not for harm. Internal governance that operationalizes a company's ethical values is critical not just when companies decide whom they sell to, but also whom they work with and use as outsourcing vendors. The acceptable use of products also needs to be considered: firms should weigh carefully not just the intended uses but the unintended uses as well. Firms developing disruptive technologies like artificial intelligence need to consider the perspectives of different stakeholders to ensure that the company's core values are in line with the public's interest, and that these values are properly imbued in their products.

A company's values are operationalized during the technology design process, and this activity is almost entirely closed. Technology design requires collaboration among functions such as finance, management, engineering, and R&D, but has little to no outside influence. Because of the necessary secrecy around developing intellectual property, the work is often confined to a small internal group and, as such, is subject to that group's biases. The interdisciplinary nature of technology development and implementation also means that corporate goals around morality and ethics are difficult to translate into action across all functions. There is often no representation of broader stakeholders' interests in the siloed groups developing the products; nobody presents the views of society during this phase. Speed and functionality take precedence over morality and societal impact because of the nature of the design process and of those tasked with its success. For this reason, it must be in part the moral responsibility of companies to ensure that the products and services they design at the very least mitigate adverse human rights impacts and at best actually make society a better place.


Implementing responsible technologies, especially in AI, could strengthen firms' competitiveness. A recent study from the Economist Intelligence Unit (EIU) [1] found that executives increasingly see the value of “tech for good,” because implementing responsible AI gives firms the opportunity to align their companies' values with their employees' beliefs. This alignment, according to the EIU study, can increase employee satisfaction and productivity and have a positive effect on talent retention and attraction. Implementing responsible AI is also good for product and service innovation, helping companies stay ahead of the curve by building new product features that take privacy, bias, transparency, and security into account. These enhanced security and privacy controls help firms build trust with customers, and that trust brings bilateral benefits: customers feel more comfortable using AI products, which in turn allows firms to transparently collect more data for AI training. Beyond building trust with customers, implementing ethical AI can also strengthen a firm's perceived trustworthiness and brand. Because trust in tech companies has eroded after recent scandals, a firm's attitude toward ethical AI can affect how society evaluates it, and that evaluation can translate directly into increased investment and share-price improvements. Firms' profitability will improve as more shareholders choose responsible investing, and as governments align with responsible AI principles, there will be more procurement and contracts for responsible firms. Taking action on moral responsibility is also an opportunity for thought leadership, which can help attract investors and talent. Last but not least, early self-regulation can help firms shape the rules and avoid losing time when regulation is actually enforced.


For the financial benefits of responsible AI to be realized, a firm will have to operationalize its ethical principles at every step of the AI development process. According to a recent white paper commissioned by the World Economic Forum [2], one method for implementing responsible technology development combines ethics-based and human-rights-based approaches. A human-rights-based approach provides a universal foundation upon which various ethical frameworks, choices, and judgments can be applied, whereas an ethics-based approach gives developers a framework for decision making where right and wrong, good and bad, are not clearly defined, which can be very useful when different traditions, cultures, countries, and religions may choose different outcomes. These frameworks will need to be implemented across the stages of the product development cycle: first during design and development, second during deployment and sales, and third during application or product use. Each phase will have its own challenges and opportunities, so it will be important that the right groups at each phase are activated and provided with the right tools, frameworks, procedures, and training. Acceptable use policies will need to define what customers can and cannot do with the products once they have been sold, and best practices and risk-mitigation priorities should also be provided. Transparency can be increased by sharing the science behind the technology and building mutual understanding of how it works. Stakeholders from all areas of society will need to be involved, especially vulnerable populations and marginalized groups that might be affected. Only when firms understand the impact their products will have on society and make good strategic decisions can they unlock the benefits of developing responsible AI.


Given the closed nature of technology development, companies have a responsibility to act as moral guides when developing and implementing technologies like AI. There are clearly many benefits to be realized if firms can successfully implement the right tools and frameworks for operationalizing values that take into account the needs of society and mitigate harm. Earning the trust of consumers is going to be an increasing priority and a potential competitive advantage. For the moment, the moral decisions surrounding the design, development, and implementation of these technologies are in the hands of the technology companies. Firms that lead the way in ethical development can not only put their companies on the right path for continued success but also help steer us toward a better future.



Citations and Resources:


[1]: Staying Ahead of the Curve – The Business Case for Responsible AI, The Economist Intelligence Unit (PDF)

https://pages.eiu.com/rs/753-RIQ-438/images/EIUStayingAheadOfTheCurve.pdf


[2]: Responsible Use of Technology, World Economic Forum (PDF)

http://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology.pdf


Authors: Erica (Yi-Chia) Chu, The Department of Bioengineering, College of Engineering and UW Medicine & Jesse Arlen Smith, President, Aiforgood Asia