How Businesses Can Use AI to Enhance, Not Replace, Human Work
AI is poised to transform our lives far faster than any technology has before, with large language models (LLMs) like ChatGPT already augmenting human capabilities in powerful ways. Business and technology leaders, however, need to see through the hype and start understanding AI’s potential – and how to take advantage of the opportunity while also mitigating its risks.
This was a main takeaway from the AI panel at the 2023 BMO Government, Reserve & Asset Managers Conference. Moderated by Victor Tung, U.S. Chief Technology and Operations Officer and Chief Information and Operations Officer, BMO Capital Markets, the panel featured two executives on the front lines of enterprise AI: Doug Smith, General Manager of Intelligent Cloud Solutions at Microsoft, and Lawrence Wan, Chief Architect and Innovation Officer at BMO.
After attracting more than 100 million users within the first two months of its launch in November 2022, ChatGPT has suddenly forced leaders at all levels to contemplate the implications of something that not long ago was just science fiction.
An Opportunity, but Approach With Caution
As an innovation leader at BMO, Lawrence Wan sees clear potential in adopting AI to help with the bank’s internal operations. “It’s extremely powerful,” he said. “It enhances and augments our internal staff.” He noted that BMO already uses other forms of AI to manage its expansive IT infrastructure, processing the vast number of data points it generates in order to predict capacity issues before they happen.
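As a purely hypothetical illustration of the kind of predictive monitoring Wan describes (not BMO’s actual tooling), a capacity forecast over infrastructure telemetry can be as simple as fitting a trend to recent utilization data; the metric, the threshold, and the sample numbers below are invented for the sketch.

```python
# Hypothetical sketch: estimate when a utilization metric will cross a capacity
# threshold by fitting a simple linear trend to recent daily telemetry.
# The metric, the 90% threshold, and the sample data are invented for illustration.
from typing import Optional

import numpy as np


def days_until_capacity_breach(daily_utilization: list, threshold: float = 0.90) -> Optional[float]:
    """Fit a least-squares line to utilization history (0-1) and project days until threshold."""
    y = np.asarray(daily_utilization, dtype=float)
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)  # linear trend over the observed window
    if slope <= 0:
        return None  # flat or falling utilization: no projected breach
    breach_day = (threshold - intercept) / slope
    return max(breach_day - (len(y) - 1), 0.0)


# Example: utilization creeping up by half a percentage point per day
history = [0.70 + 0.005 * day for day in range(30)]
print(days_until_capacity_breach(history))  # roughly 11 days until the projected 90% mark
```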
But anything customer-facing is another matter. “Very rarely in the bank would we use AI to make customer decisions,” said Wan. “That would be something that you really need to think twice about leveraging AI to do.”
Wan sounded a few notes of caution about AI, the first being that it’s not necessarily the solution to every data analytics problem. “Sometimes using well-established algorithms or rule-based solutions is a better fit,” he said. “People need to think about when is the right time to use AI and when is the right time to actually find another solution?”
Not surprisingly, cyber risk is another primary concern for Wan. Working with AI models requires a lot of data that could include personal information, so extensive safeguards need to be in place to train the models and use the applications.
As for Microsoft, Doug Smith said the company acknowledges the security and privacy risks and is proactively advocating for regulations with governments around the world. However, he noted regulations tend to lag technology capabilities, so Microsoft needs to be thoughtful in how it provides services. Smith pointed to its decision to integrate ChatGPT into its Azure OpenAI Service, which has existing regulatory compliance, security, and governance capabilities. “That helps us create safe environments for who can use this, and governance around how they can use it,” he said.
Creating a Co-Pilot
Microsoft is a major investor in ChatGPT’s developer, OpenAI, and offering those models through the Azure OpenAI Service provides access to the most advanced AI models, backed by Azure’s trusted, enterprise-grade capabilities. “We’re already starting to see some incredible use cases that are making their way into production, not just pilot programs,” said Smith.
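For readers curious what that access looks like in practice, here is a minimal sketch of calling a chat model through an Azure OpenAI deployment with the openai Python SDK; the endpoint, API version, deployment name, and prompts are placeholders rather than details from the panel, and a real enterprise setup would layer on the identity, network, and governance controls Smith describes.

```python
# Minimal sketch (not production code): calling a chat model through an
# Azure OpenAI deployment with the openai Python SDK (v1+).
# The endpoint, API version, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version; check what your resource supports
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the name of your Azure deployment, not the raw model name
    messages=[
        {"role": "system", "content": "You are an assistant for internal bank staff."},
        {"role": "user", "content": "Summarize these meeting notes in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```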
Still, Smith emphasized that the goal is not for AI to replace jobs. He described Microsoft’s vision for LLMs as a “cohesive pairing” with human work. “We’re seeing it as an assistant, as a co-pilot,” said Smith. Rather than replacing workers, he explained, Microsoft sees the technology as “a massive augmenter of work, removing drudgery that everybody has and allowing them to focus on the human, higher-value parts of their job.”
Smith used the example of simply speaking to a computer to produce a PowerPoint deck that summarizes a report for your team and to schedule a meeting to discuss it. What today might require five software applications and hours of prep over multiple days could be accomplished by uttering just a few sentences. “We’ve gotten really used to software, the menus and clicks and drags and drops and cuts and paste,” said Smith, but he added that it’s not really a natural way of working. By bringing natural language to the foreground, menial tasks and barriers to productivity could fall away.
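One way to picture that workflow, purely as an illustration rather than Microsoft’s implementation, is the tool-calling pattern many LLM assistants use: the application exposes functions the model may invoke, and the model maps a natural-language request onto them. The create_slide_deck and schedule_meeting tools below are hypothetical.

```python
# Illustrative sketch of the tool-calling pattern behind "co-pilot" style assistants.
# The create_slide_deck / schedule_meeting tools are hypothetical, not a real product API.
import json

from openai import AzureOpenAI  # a client configured as in the earlier sketch

tools = [
    {
        "type": "function",
        "function": {
            "name": "create_slide_deck",
            "description": "Create a presentation summarizing a document",
            "parameters": {
                "type": "object",
                "properties": {
                    "source_document": {"type": "string"},
                    "slide_count": {"type": "integer"},
                },
                "required": ["source_document"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "schedule_meeting",
            "description": "Schedule a meeting with a list of attendees",
            "parameters": {
                "type": "object",
                "properties": {
                    "attendees": {"type": "array", "items": {"type": "string"}},
                    "topic": {"type": "string"},
                },
                "required": ["attendees", "topic"],
            },
        },
    },
]


def handle_request(client: AzureOpenAI, user_request: str) -> None:
    """Let the model decide which tools to call for a natural-language request."""
    response = client.chat.completions.create(
        model="my-gpt-deployment",  # placeholder Azure deployment name
        messages=[{"role": "user", "content": user_request}],
        tools=tools,
    )
    for call in response.choices[0].message.tool_calls or []:
        # In a real system these would dispatch to presentation and calendar APIs.
        print(call.function.name, json.loads(call.function.arguments))


# handle_request(client, "Summarize the Q2 risk report as a deck and set up a team meeting to review it.")
```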
AI Performance Reviews and Compliance Analysis
A live demo of Azure OpenAI offered another glimpse into how LLMs could improve work in a bank’s contact center. First, a phone call exchange was automatically transcribed in real time, with all of the relevant details instantly extracted, including the customer name, product discussed, phone number, verification information and address. Azure OpenAI GPT then generated a written summary of the phone conversation and produced a performance review of the customer experience, providing three positives and three areas for improvement.
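As a rough sketch of how the post-call steps of such a demo might be wired up, assuming a transcript has already been produced by a speech-to-text service and reusing the Azure OpenAI client from the earlier sketch: one chat completion extracts the structured details and another drafts the summary and coaching feedback. The prompts, field names, and deployment name below are illustrative, not the demo’s actual implementation.

```python
# Illustrative sketch of post-call analysis over an existing transcript.
# Assumes the AzureOpenAI client from the earlier sketch; prompts and fields are invented.
import json

EXTRACTION_PROMPT = (
    "From the call transcript, return JSON with keys: customer_name, product, "
    "phone_number, verification_info, address. Use null for anything not mentioned."
)

REVIEW_PROMPT = (
    "Summarize the call in one paragraph, then list three things the agent did well "
    "and three areas for improvement."
)


def analyze_call(client, transcript: str):
    """Return (extracted_details, summary_and_review) for a call transcript."""
    extraction = client.chat.completions.create(
        model="my-gpt-deployment",  # placeholder deployment name
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
        response_format={"type": "json_object"},  # JSON mode, where the model/API version supports it
    )
    details = json.loads(extraction.choices[0].message.content)

    review = client.chat.completions.create(
        model="my-gpt-deployment",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return details, review.choices[0].message.content
```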
Crawl Before You Walk
As impressive as some consumer-facing AI tools like ChatGPT can be, Wan also expressed apprehension about how opaque AI’s inner workings are. “A lot of the AI solutions are so complex that it’s unclear how it generates the outcome,” he said. “Without a certain level of transparency, it is difficult to assess and trust the results.”
Smith concurred, pointing to Microsoft’s Responsible AI principles and its efforts to develop industry standards and partner with competitors to build consortiums where they can share ideas about how to proceed safely. “We’re about doing this right, not just doing this fast,” said Smith. “The fast part is happening all by itself.”
In fact, Smith said there is likely too much hype. “One of the things we’re trying to do is to tamp down the hype,” he said.
“More businesses need to understand what AI actually is,” said Wan. “There are a lot of legitimate concerns, but also misconceptions. We need multiple disciplines, including legal or policy setting perspectives, so we can make sure AI is good not just for business, but also good for society.”
Panel moderator Victor Tung offered a final word of advice: the standard fast-follower approach most organizations use with new tech won’t cut it. “Very quickly, this is going to be embedded in everything we do, and the pace of change associated with this will be faster than anything we’ve seen,” said Tung.
Captured in photo (L-R): Victor Tung, U.S. Chief Technology and Operations Officer and Chief Information and Operations Officer, BMO Capital Markets; Doug Smith, General Manager of Intelligent Cloud Solutions at Microsoft; Lawrence Wan, Chief Architect and Innovation Officer at BMO