Microsoft and the chipmaker Nvidia are the latest companies to take the hot seat in a series of Senate judiciary hearings on artificial intelligence as the federal government continues to grapple with how to regulate the technology.
Microsoft’s president, Brad Smith, and Nvidia’s chief scientist, William Dally, are testifying on Tuesday alongside Woodrow Hartzog, a professor of law at Boston University School of Law.
At the start of the hearing, Senator Richard Blumenthal urged a risk-based approach to regulating AI. Earlier this week, he and Josh Hawley, a Missouri Republican, introduced a bipartisan AI framework that would require companies to register with an independent oversight body tasked with licensing high-risk AI technology. The proposal, the full text of which is not yet publicly available, also calls on Congress to clarify that a section of the Communications Decency Act of 1996 does not protect tech companies developing AI tools from liability and potential lawsuits.
“Make no mistake, there will be regulation,” Blumenthal said. “The only question is how soon and what. It should be regulation that encourages the best in American free enterprise but at the same time provides the kind of protections that we do in other areas … We need to make sure that these protections are framed and targeted in a way that applies to the risk involved, risk-based rules.”
Both Microsoft and Nvidia have been at the forefront of the AI boom, ramping up their investments across the AI supply chain. Microsoft has invested in a series of partnerships as well as its own in-house AI technology, Copilot. In addition to its $10bn investment in the ChatGPT owner, OpenAI, Microsoft partnered with Meta on the release and support of the social media company’s open-source large language model Llama 2. Nvidia, for its part, has benefited from its early investment in and focus on building computer chips for AI systems, raking in more than $13bn in revenue in the second quarter. The 30-year-old company is now valued at $1tn and is one of the biggest beneficiaries of the AI boom, with its chips powering many of the world’s major AI tools, including ChatGPT.
As efforts to rein in these technologies continue, digital advocacy groups warn that tech companies cannot be trusted to regulate themselves and that any rules Congress comes up with should account for that.
“Big tech has shown us what ‘self-regulation’ looks like, and it looks a lot like their own self-interest,” said Bianca Recto, communications director for Accountable Tech. “Senators must go into this week’s AI hearings with their eyes wide open – or risk once again getting fooled by savvy PR at the expense of our safety.”
In opening testimony, both Microsoft and Nvidia commended the Senate on its work to create a legal framework that would require “high-risk” AI to be certified by an oversight board, being sure to draw a distinction between advanced AI and less capable systems. However, Hartzog urged Congress to steer clear of half measures and industry-led approaches like “encouraging transparency, mitigating bias, and promoting principles of ethics” without also implementing means to enforce liability and other important regulatory mechanisms.
“It’s easy to commit to ethics, but industry doesn’t have the incentive to leave money on the table for the good of society,” Hartzog continued.
While the industry representatives both agreed Congress was moving in the right direction, Nvidia’s Dally said some fears, particularly doomsday concerns around AI systems becoming sentient, were unfounded. “Uncontrollable artificial general intelligence is science fiction and not reality,” Dally said. “At its core, AI is a software program that is limited by its training, the inputs provided to it and the nature of its output. In other words, humans will always decide how much decision-making power to cede to AI models.”
The companies were also asked to address concerns over how AI is already being used and how it is trained. Senator Amy Klobuchar asked Smith about how AI systems use content, particularly journalism.
“We should let local journalists and publications make decisions about whether they want their content to be available for training,” Smith said. “We should certainly let them, in my view, negotiate collectively.”
Several senators also brought up the issue of disinformation ahead of the election. Blumenthal said Congress was facing a huge dilemma as deepfakes become more sophisticated and harder to distinguish from authentic images, audio or video. “We need to do something about it, we can’t delude ourselves by thinking with a false sense of comfort that we’ve solved the problem if we don’t provide effective enforcement,” Blumenthal said. “To be very blunt, the Federal Election Commission often has been less than fully effective in enforcing rules related to campaigns.”
Hartzog suggested that Congress use several different tools and consider how surveillance advertising business models play a role in powering the technologies that allow “the lie to be created but flourish and to be amplified”. “I would think about rules and safeguards that limit those financial incentives,” he said.
Hawley also brought up questions of data privacy and protections for children using AI systems. He asked Smith whether Microsoft would commit to raising the age limit for using AI systems from 13 to something older. “I don’t want any kids to be your guinea pig,” Hawley said. “I don’t want you to learn from their failures. This is what happened with social media … which made billions of dollars giving our kids a mental health crisis.”
Smith first said Microsoft followed existing laws that restrict the use of children’s data for advertising. He also said that 13 was not necessarily too young to use AI, depending on what children want to use it for. “We want kids in a controlled way, with safeguards, to use tools …”
The hearing continues a big week for AI at the Capitol. On Wednesday, the Senate is hosting its first ever AI Forum, a closed-door meeting convened by Senator Chuck Schumer, who has invited several tech executives including Google’s Sundar Pichai, Mark Zuckerberg, Elon Musk and Nvidia’s Jensen Huang.