A Google engineer recently made international headlines when he claimed that LaMDA, the company’s system for building chatbots, was sentient. Since his initial post, public debate has raged over whether artificial intelligence (AI) exhibits consciousness and feels as acutely as humans do.
While the topic is undoubtedly fascinating, it’s also overshadowing other, more pressing risks that large language models (LLMs) pose, such as unfairness and privacy loss, especially for companies racing to integrate these models into their products and services. These risks are compounded by the fact that companies using these models often have little insight into the data and processes used to create them, which can lead to issues such as stereotyping, hate speech and bias.
What is an LLM?
LLMs are massive neural networks that learn from enormous volumes of text (think Wikipedia, Reddit and books). While they are designed to generate text (such as answering questions or summarizing long documents), they have proven to excel at many other tasks, from creating websites to prescribing medicine to basic arithmetic.
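To make the basic interaction concrete, here is a minimal sketch of prompting a pretrained model for text generation using the open-source Hugging Face transformers library. The small gpt2 checkpoint is an illustrative stand-in, not one of the commercial models discussed in this article.

```python
# A minimal sketch: prompting a small pretrained language model.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, freely available checkpoint (gpt2 is a stand-in for
# the far larger proprietary models discussed here).
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can be used to"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with statistically likely text.
print(outputs[0]["generated_text"])
```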
It’s this ability to generalize to tasks for which they were not originally designed that has propelled LLMs into a major area of research, and companies across virtually every industry are now commercializing applications built on top of base models trained by others (e.g., Google, Microsoft and OpenAI).
Researchers at Stanford University coined the term “foundation models” to capture the fact that these pretrained models underlie countless other applications. These models are not without risks, however.
LLMs have their downsides
One of those risks, and it can be a serious one, is the environmental toll. One of the most cited papers from 2019 found that training a single large model can produce as much carbon as five cars over their lifetimes, and models have only gotten larger since then. This environmental cost has direct implications for a business’s ability to meet its sustainability commitments and, more broadly, its ESG targets. Even when models are trained by others, businesses can’t ignore their carbon footprint, in line with the expectation that companies track emissions across their entire supply chain.
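For teams that want to measure this footprint in their own training runs, one option is the open-source codecarbon package, which estimates the CO2-equivalent emissions of a block of code. The sketch below is illustrative; the project name and workload are hypothetical placeholders.

```python
# An illustrative sketch: estimating the emissions of a training
# workload with the open-source codecarbon package.
# Requires: pip install codecarbon
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="llm-finetuning-demo")  # hypothetical name
tracker.start()

# Stand-in for a real training or fine-tuning workload.
total = sum(i * i for i in range(10_000_000))

emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```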
Then there’s the issue of bias. The internet data sources used to train these models have been found to contain bias against a range of groups, including women and people with disabilities. These sources also tend to over-represent younger users from developed nations, perpetuating that worldview and diminishing the voices of underrepresented populations.
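One simple way such bias can surface, shown below as a toy probe rather than a rigorous audit, is to ask a pretrained masked language model to fill in a pronoun and compare its top guesses across professions. The bert-base-uncased checkpoint is just a convenient public example.

```python
# A toy probe, not a rigorous bias audit: compare a pretrained
# model's top completions for masked pronouns across professions.
# Requires: pip install transformers torch
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]:
    top = unmasker(sentence)[0]
    # Systematically skewed pronoun choices hint at stereotypes
    # absorbed from the training data.
    print(f"{sentence} -> {top['token_str']} ({top['score']:.2f})")
```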
Such biases can have a direct effect on businesses’ DEI commitments: even as companies work to correct bias elsewhere, such as in hiring practices, their AI systems may quietly continue to perpetuate it. They might also build customer-facing applications that fail to produce consistent, reliable results across customer subgroups, geographies and age groups.
LLMs can also produce unexpected and unsettling results that pose real dangers. Take, for example, the artist who used an LLM to recreate his childhood imaginary friend, only to have that imaginary friend ask him to put his head in the microwave. While this is an extreme example, it shows that businesses can’t ignore these risks, especially when LLMs are applied in inherently high-risk areas like healthcare.
These risks are exacerbated by the lack of transparency into all the ingredients that go into creating a modern, production-grade AI system: data pipelines, model inventories and optimization metrics, as well as the broader design choices governing how the system interacts with humans. Companies shouldn’t blindly incorporate pretrained models into their products and services without carefully considering their intended use, the source data and the many other factors that give rise to the risks described above.
LLMs are exciting and, used in the right contexts, can deliver impressive business results. But those benefits shouldn’t be pursued without weighing the potential for harm to customers and society, litigation and other consequences.
Responsible and ethical AI
Companies pursuing AI should have a robust responsible AI (RAI) program in place to ensure their AI systems align with corporate values. This starts with an overarching strategy that includes principles, risk taxonomies and a clear definition of the company’s AI-specific risk appetite.
Equally important is establishing the processes and governance needed to identify and mitigate risks, including clear accountability, escalation paths and oversight, as well as direct integration with broader corporate risk functions.
Employees must be able to raise ethical concerns without fear of reprisal, and those concerns must then be assessed in a clear, transparent way. A culture shift that aligns the RAI program with the organization’s mission and values increases the chance of success, as do the key product-development processes: KPIs, portfolio monitoring and controls, and program steering and design.
Meanwhile, it’s important to build responsible AI expertise directly into product development, including a structured risk-assessment process in which all stakeholders are identified and considered.
Given the sociotechnical nature of many of these issues, it’s also important to embed RAI experts in inherently high-risk efforts to help guide this process. Teams likewise need the right technology, tools and frameworks to accelerate their work and make responsible solutions practical. Software toolkits, playbooks and documentation templates, for example, can all be used to support transparency and auditing, as the sketch below illustrates.
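As one illustration of such a documentation template, here is a minimal sketch loosely modeled on the widely used “model card” format. The model name and all field values are hypothetical placeholders, not a prescribed standard.

```python
# A minimal sketch of a documentation template, loosely modeled on
# the widely used "model card" format. All field values below are
# hypothetical placeholders.
MODEL_CARD_TEMPLATE = """\
# Model Card: {name}

## Intended use
{intended_use}

## Training data
{training_data}

## Known limitations and risks
{limitations}

## Evaluation across subgroups
{evaluation}
"""

card = MODEL_CARD_TEMPLATE.format(
    name="customer-support-summarizer",  # hypothetical model name
    intended_use="Summarizing support tickets for internal triage.",
    training_data="Fine-tuned on anonymized internal tickets, 2020-2022.",
    limitations="Not evaluated for medical or legal content.",
    evaluation="Accuracy reported separately by region and language.",
)

# Ship the card alongside the model to support transparency and audits.
with open("MODEL_CARD.md", "w") as f:
    f.write(card)
```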
Leading with RAI from the top
Business leaders should be prepared to communicate their RAI commitments and processes both internally and externally. To demonstrate their commitment to responsible AI, they could, for example, create an AI code of conduct that goes beyond high-level principles.
RAI doesn’t just prevent inadvertent harm to customers and society as a whole. Responsible AI leaders report higher customer retention, greater market differentiation, faster innovation, and better employee recruitment and retention. External communication about a company’s RAI efforts creates the transparency needed to elevate customer trust and realize these benefits.
LLMs are powerful tools poised for incredible business impact, but they also come with real risks that must be identified and managed. With the right steps, corporate leaders can balance the benefits against the risks to deliver transformative impact while minimizing harm to employees, customers and society. The debate over sentient AI, however, shouldn’t be allowed to distract from these important and pressing issues.
Steven Mills is chief AI ethics officer and Abhishek Gupta is senior responsible AI leader & expert at BCG.