Cantwell Opening Remarks at Hearing on AI’s Acceleration and the Need to Protect Americans’ Privacy
July 11, 2024
U.S. Senator Maria Cantwell (D-Wash.), Chair of the Senate Committee on Commerce, Science, and Transportation, delivered the following opening remarks at today’s hearing on how AI is accelerating the need to protect Americans’ privacy.
Chair Cantwell’s Opening Remarks As Delivered:
The Senate Committee on Commerce, Science, and Transportation will come to order. I want to thank the witnesses for being here today for testimony on the need to protect Americans’ privacy and on AI as an accelerant to the urgency of passing legislation. I want to welcome Dr. Ryan Calo, University of Washington School of Law and Co-Director of the University of Washington Tech Policy Lab; Ms. Amba Kak, Co-Executive Director of the AI Now Institute in New York; Mr. Udbhav Tiwari, Global Product Policy Director for Mozilla, from San Francisco; and Mr. Morgan Reed, President of ACT, the App Association, of Washington, D.C. Thank you all for being here for this very, very important hearing.
We are here today to talk about the need to protect Americans’ privacy and why AI is an accelerant that increases the need for passing legislation soon.
Americans’ privacy is under attack. We are being surveilled, tracked online and in the real world, through connected devices. And now, when you add AI, it is like putting fuel on a campfire in the middle of a windstorm.
For example, a Seattle man's car insurance increased by 21% because his Chevy Bolt was collecting details about his driving habits and sharing them with data brokers, who then shared the data with his insurance company. The man never knew the car was collecting the data.
Data about our military members, including contact information and health conditions, is already available for sale from data brokers for as little as 12 cents per record. Researchers at Duke University were able to buy such data sets covering thousands of active military personnel.
Every year, Americans make millions of calls and text chats to crisis lines seeking help when they are in mental distress. You would expect this information to be kept confidential. But a nonprofit suicide crisis line was sharing data from those conversations with its for-profit affiliate, which used the data to train its AI product.
Just this year, the FTC sued a mobile app developer for tracking consumers’ precise location through software embedded in a grocery list and shopping rewards app. The company used this data to sort consumers into precise audience segments. Consumers who used this app to help them remember when to buy peanut butter didn't expect to be profiled and categorized into a precise audience segment like “parents of preschoolers.”
These privacy abuses, and the millions of others happening every day, are bad enough. But now AI is an accelerant, and it is the reason we need to speed up passage of our privacy law.
AI is built on data, lots of it. Tech companies can't get enough of it to train their AI models -- your shopping habits, your favorite videos, who your kids’ friends are -- all of that. And we’re going to hear testimony today from Professor Calo about how AI gives companies the capacity to derive sensitive insights about individuals. So it is not just the data that is being collected; it is the ability to derive sensitive insights about individuals from the data in the system.
This, as some have said, referring now to [Dr. Calo’s] testimony, is creating an “inference economy” that could become very challenging. That is why you also point out, Dr. Calo, that a privacy law helps offset the power of these corporations, and why we need to act.
I also want to thank Ms. Kak for her testimony, because she is clearly talking about that same corporate power and about the unfair and deceptive practices authority that, as we have long known, should rest with the FTC.
There is also a lack of transparency about what is going on with prompts. The synergy with AI is that companies are no longer just taking our personal data and sending us cookie ads; they are actually putting that data into prompts. This is a very challenging situation. And I think your question, whether we are going to allow our personal data to train AI models, is very important for our hearing today.
We know that companies want this data to feed their AI models and make the most money. These incentives are creating a race to the bottom, where the most privacy-protective companies are at a competitive disadvantage.
Researchers project that if current trends continue, companies training large language models may run out of new publicly available, high-quality data to train AI systems as early as 2026.
Without a strong privacy law, when the public data runs out, nothing is stopping them from using our private data. I'm very concerned that the ability to collect vast amounts of personal data about individuals, and create inferences about them quickly at very low cost, can be used in harmful ways, like charging consumers different prices for the same product.
I talked to a young developer in my state and asked him what is going on. He said, well, I know one country is using AI and basically giving it to their businesses. I asked why they would do that. He said they want to know, when a person calls for a reservation at a restaurant, how much income that person really has. If the caller doesn’t have enough money to buy a bottle of wine, they give the reservation to someone else.
So the notion is that discriminatory practices can already exist with just a small amount of consumer data.
AI in the wrong hands is also a weapon. Deepfake phone scams are already plaguing my state. Scammers use AI to clone voices and defraud consumers by posing as a loved one in need of money. These systems can re-create a person's voice in just minutes, taking the familiar grandparent scam and putting it on steroids.
More alarming, earlier this month, the Director of National Intelligence reported that Russian influence actors are planning to covertly use social media to subvert our elections. The ODNI called AI “a malign influence accelerant,” saying it is being used to more convincingly tailor video and other content ahead of the November election.
Just two days ago, the DOJ reported that it dismantled a Russian bot farm intended to sow discord in the United States. Using AI, the Russians created scores of fictitious user profiles on X, generated posts, and then used other bots to repost, like, and comment on those posts, further amplifying the original fake content. AI made this possible at tremendous scale. Misinformation existed before, and it might have been placed in a chat group, but now, with bots and AI as an accelerant, that information can be distributed far more broadly, very, very quickly.
Privacy is not a partisan issue. According to Pew Research, the majority of Americans across the political spectrum support regulation. I believe our most important private data should not be bought or sold without our approval. And tech companies should make sure they implement these laws and help stop this kind of interference.
The legislation that Representative McMorris Rodgers and I have worked on does just that.
And I want to say… that Senator Blackburn and I will be introducing [legislation] called the COPIED Act, which provides much-needed transparency around AI-generated content. The COPIED Act will also put creators, including local journalists, artists, and musicians, back in control of their content with a watermark process that I think is very much needed.