James Kotecki (00:08):
This is CES Tech Talk. I'm James Kotecki, bringing you another special conversation. This time from day two of CES in Las Vegas, we convened a stellar media panel to share their ideas and insights that are bubbling up here at the show. Enjoy this discussion from the heart of the world's most powerful tech event.
Laura Ambrosio (00:30):
All right, thank you everyone. So my name is Laura Ambrosio. I'm the communications manager at the Consumer Technology Association, which produces CES, where we are today. So excited to have a great panel of top tier tech policy reporters with us from Axios, Politico, and Bloomberg for a great discussion on all things tech policy, from AI to privacy. So, I'd like to welcome our panelists and have them briefly introduce themselves.
Ryan Heath (00:55):
Hi everyone, I'm Ryan Heath. I'm the global tech correspondent at Axios, and I write the AI+ newsletter for them.
Steven Overly (01:03):
I'm Steven Overly, I host Politico Tech, which is a daily podcast about all the ways that technology is disrupting our politics and policy.
Oma Seddiq (01:12):
Hi everyone, I'm Oma Seddiq, and I'm a tech policy reporter for Bloomberg Government and I'm based in Washington, DC.
Laura Ambrosio (01:19):
So before we dive in too deep, I want to bring up that this is all of your first times attending CES in person. So I want to get your thoughts. What's been the most surprising thing so far?
Ryan Heath (01:31):
For me, it's the absolute breadth of people that are exhibiting here. I know that the association has put in a lot of effort to broaden the discussion as well. So we've been really able to do everything from looking at assistive technologies, to green technologies, to the gadgets that you know and love from CES, and now the policy discussions as well. So I've been impressed by that.
Steven Overly (01:53):
I think the most surprising thing to me is some of the sessions around challenges or issues that technology may present. I think that there is a more nuanced conversation happening at CES than maybe I initially anticipated there would be.
Oma Seddiq (02:07):
I think I agree, just the sheer amount of people, which is great because we're all gathered here to talk tech. I think one of the cool things I've seen on the consumer tech side is the Samsung LED screens. So I got a chance to just go around and be a little bit of a participant in addition to the policy conversations and discussions. So just going around the booths has been pretty cool.
Laura Ambrosio (02:33):
So on the consumer or policy side, what's still on your agenda? What are you looking forward to seeing?
Steven Overly (02:39):
I mean, I think the big theme this year is obviously AI. Just yesterday I met a South Korean makeup company using it to match your foundation, and another company using it to customize the ergonomics of your desk chair. And so I'm looking forward to the rest of the show, seeing all of the ways that AI is being applied and being built into these products we use every day.
Oma Seddiq (03:03):
I just came from the panel discussion with the commissioners, so I think I'm looking forward to more policy-focused conversations and getting a new industry perspective. Being in Washington day in, day out, there tends to be a more skeptical view of AI, whereas here there's a lot of excitement and passion around the products and the innovation coming out of it. And so I think this is offering a new perspective, more of the industry perspective of where AI could actually do a lot of good, rather than a lot of the doomsday you hear on Capitol Hill.
Ryan Heath (03:49):
And not just do good, but there is a lot of technology that makes life a little bit better. The thing that I'm most excited by is the technology that can transform a life. And so I've been noticing a real trend in technology that helps people take charge of their lives, and that makes the most difference if you have a disability or some kind of impairment. We've got headsets that are replacing guide dogs for a fraction of the cost, that really anyone in the world might be able to have, including in places where they can't have a guide dog. And that excites me a little bit more than something that is merely convenient.
Laura Ambrosio (04:26):
So, Oma, you mentioned the commissioner session this morning. One of the highlights of the policy program at CES is the Innovation Policy Summit, where we feature government and industry speakers and we talk about all different issues, from digital trade to self-driving vehicles to AI. What are some of the sessions that you guys have attended? What were the key takeaways?
Oma Seddiq (04:50):
Well, from the commissioners' session just now, both of the FCC commissioners emphasized the need for Congress to reauthorize their spectrum auction authority. It's been almost a year since Congress let the authority lapse. And so I think re-upping that conversation and bringing it back to the forefront, as Congress kicks off this year with a lengthy list of priorities to get to, was an opportunity to bring that back to policymakers' focus.
Steven Overly (05:22):
Yeah, I attended a session yesterday about basically how AI is going to change the way we work and what that means for workers. And it was interesting to hear the acting labor secretary, Julie Su, and the AFL-CIO president, Liz Shuler, talk about how we're in this moment where we have big decisions to make about where we want to integrate AI and where we don't, what regulations we want to have around AI to manage its risks, and what the long-term consequences of that will be for the US workforce. And it was not surprising, but interesting, to hear that topic come up in several different sessions I attended yesterday, including the keynote speech from Walmart CEO Doug McMillon.
(06:07):
Where yes, Walmart seems to be very enthusiastic about embracing AI, integrating it into its stores in many ways, but he was very quick to address this real anxiety around whether AI will displace human workers and how Walmart is grappling with that. And so to me, I think that's a really interesting undercurrent as we see all of this enthusiasm about AI. There are also real questions about what it means in the long term for our daily lives.
Ryan Heath (06:38):
Yeah, the Walmart keynote really crystallized a thought that I'd had in my head for a while: we've been having a quite distorted AI debate. I don't want to say it's been hijacked by one company, but we talk about a general technology through just one form of AI, these chatbots, and AI exists in lots of other forms. And if it's going to drive our economy and generate a lot of enthusiasm, people have to come along on that journey. I think Walmart hit the nail on the head yesterday. They understood that if AI is going to succeed, it needs to address specific frustrations, and you need to have conversations along the way so that people see why they should be using it or be part of that process. That incremental approach could be less sexy, but it's probably going to be more effective in the end. So I think they get points for that.
Laura Ambrosio (07:30):
So obviously AI is dominating the news. It's one of our biggest trends at CES this year, from robots to autonomous vehicles. What are some of those applications that have really stood out to you so far at the show?
Oma Seddiq (07:43):
I haven't gotten a chance to see it yet, but I did read about Volkswagen introducing ChatGPT into its line of electric vehicles. So you have the ability to speak aloud, it's voice-enabled, I believe, and ask questions while you're driving. Incorporating chatbots into numerous different technologies, I think, is fascinating.
Steven Overly (08:12):
Yeah. I think anytime a new technology is introduced and is really hot and buzzy, it's always fun and fascinating to me to see the interesting ways that people wedge it into products where it may not even really belong. Sometimes people use technology for the sake of technology. And so I think, with AI being so big this year, it's sometimes the small products where it's like an AI-enabled toothbrush or an AI-enabled desk chair where you think, I don't know that this is solving a real problem that I have, but it's cool, I guess. And being cool, maybe that's enough.
Ryan Heath (08:55):
And sometimes low tech is good. There's going to be a bunch of sodium batteries that come onto the market that are going to transform electric vehicles and autonomous vehicles, and that might not be the highest tech part of the process, but if you don't have it, you don't get the efficiencies needed to let all the high-tech stuff be really cool. So I think that's great. And then I think I mentioned it already, but I was testing a headset yesterday for people with vision impairments or people who are totally blind, and that's basically repurposing the LIDAR computer vision technology that will be core to autonomous vehicles taking over the roads one day. And if you operate that at about 75% of its capacity, you can put it on someone's head and totally transform their lives. And I don't think that was the original vision for the tech, but it is a fantastic second-generation purpose for the tech.
Laura Ambrosio (09:49):
So we're seeing all these great applications solving problems, improving accessibility, mobility, healthcare. But just like a lot of new and emerging technologies, there are also concerns and risks. So CTA released a policy framework that seeks to provide guidance balancing those risks with allowing innovation to thrive. So what do we think AI regulation is going to end up looking like?
Ryan Heath (10:18):
A long way away, that's what it's looking like. Look, we're having smarter conversations about AI at a faster pace than we did around social networks, for example, in earlier generations of technology. So that's a positive. And because this is such a transformative set of technologies, you have to have those conversations early and not make major fundamental errors at the beginning of this long journey. So that's great, but don't expect any comprehensive law coming out of Washington this year is what I would say. But Steven and Oma can build on that detail.
Oma Seddiq (10:50):
You hit the nail on the head, Laura. Striking that balance is what regulators are really trying to do right now, in terms of grappling with AI's many risks: widespread misinformation, bias, jobs, as we mentioned earlier. And so they want to mitigate some of those risks while at the same time promoting a lot of the benefits of the technology that we just mentioned. And I agree, I don't think that we should be anticipating something sweeping in the way of the CHIPS and Science Act, but I think incrementally we'll begin to see some legislative action. And I anticipate that we'll see at least the beginning of that this year.
(11:42):
One area in which lawmakers are trying to move quickly is addressing the issue of deepfakes. It is an election year, and so that's of concern to many, many lawmakers who are running for office or for reelection. And so I think combating misinformation from deepfake campaign ads is really top of mind, and we could potentially see movement there. Another key area is competition and maintaining the United States' competitiveness in this space, particularly against China. So there are areas in which we could see some piecemeal approaches. And so far the conversations on the Hill have been very bipartisan. It would be smart to take action while it remains bipartisan, because it can always veer off track and turn partisan.
Steven Overly (12:37):
It's interesting because I remember covering tech 15 years ago and the dawn of social media and what I like to think of as the gee whiz years of tech coverage. Where every social media platform, every app, it was look at this cool new thing. And there really was not much consideration for risks and downsides. And now with AI, as it's getting off the ground, it's actually a much more balanced conversation in many ways. I know there are many in industry who probably think it skews a bit too negative, but I do think there's at least this recognition both from the industry and from regulators that, okay, this technology does have some pitfalls potentially. How do we stop them?
(13:17):
I am inclined to agree with Ryan that Washington is not going to do anything major on this this year, perhaps some niche legislation addressing very specific issues with AI. But the reality is, I think in the near term, any real AI regulation is going to come from either overseas (obviously the EU has its AI Act) or from the states. I would not be surprised to see more states in the US pass AI legislation this year. Whether that's comprehensive or more tailored bills, I think, remains to be seen, but that's likely where the first moves are going to happen.
Ryan Heath (13:54):
A couple of final thoughts, and I'll be brief, I promise. There is an advantage to the US moving a bit slower than the EU or some other jurisdictions on regulation, which doesn't diminish the need for the regulation. But the EU didn't properly anticipate generative AI and its impacts, so it had to make a bunch of last-minute pivots to its law, and it's very hard to future-proof these things. So I think the US has a second- or third-mover advantage because it's taking a bit of time, and hopefully the Biden executive order will fill some of that space so that you don't get too many of the downside risks popping up in the meantime.
(14:31):
And then one other thought: there is a real tension between the people who talk about the existential risks, the robots will control us, humanity is doomed, versus the more mundane discrimination, bias, and workplace risks. And I think we have to take that debate really seriously, because when people focus on the existential risks, that gives them time to play and cause problems in the more mundane areas, and they're people who only get hurt when all of humanity is exploding or dying. Whereas there are a lot of vulnerable groups that will get hurt in the meantime if we don't fix some of those more mundane aspects of AI.
Laura Ambrosio (15:08):
So you've all touched on this, but with AI technology quickly evolving, we're starting to see movement on the policy front, from the executive order to the Senate forums. But what will happen if the United States does not lead on AI technology? And, Oma, you talked about the competition with other countries. What would be the consequences there?
Oma Seddiq (15:30):
I mean, we are in a very different era from, as Ryan mentioned, the concerns around social media. I think that's a prime example of what happens when the US does not take a lead on regulation or policy, and now we're grappling with the consequences of that, particularly in terms of consumer privacy and safety. A lot of that debate is now around children's online safety and privacy. And so it's as if the US is still playing catch-up to try and prevent some of these harms that have already happened.
(16:09):
And I think you can anticipate a similar fate but at scale, because AI impacts just about everything, if US policymakers were to just sit back and not do anything and let the rest of the world put the measures in place. At the same time, I think the US has certain principles and values that it wants to put out there, particularly as it's trying to position itself against China. And so letting countries like China take the lead in terms of AI policymaking allows them to set the scope and set the principles of what AI should be like around the world.
Steven Overly (16:53):
I mean, I think in some ways the consequences of that really depend on who you're talking to and what vantage point you're looking at it from. Because I mean, the reality is, certainly when it comes to tech regulation, the US has not been a leader on most things. I mean, the EU has led the way, for instance, on privacy regulations. The EU has led the way, along with some others, on regulations around online harms and speech. And so the sky has not necessarily fallen. What's happened instead is a lot of those EU or foreign regulations wind up getting applied in the US because companies just want to set a global standard for themselves.
(17:32):
And so you could argue whether that's a good thing or not, but I think one consequence that probably doesn't get discussed enough is the disparity in the rights that American consumers have compared to consumers in other countries. Already, Americans have a different set of privacy rights than European citizens do. They're going to have a different set of protections around online hate speech and disinformation than EU citizens have. And so I think that's a consequence that does not get enough attention: what rights do Americans lack that other consumers of technology have?
Ryan Heath (18:16):
On the China front, a couple of thoughts. I think there are things to worry about in a competitive sense that we have already successfully started to worry about. The big one there is chip production, and that is a proxy for anything where the US or the coalition of democracies might be unnecessarily dependent on China. You don't want that dependence. So funding a proper industrial base and having advanced chip manufacturing in democracies is really important, and we're on the right track there. But then think about the ways AI might develop in China. It's not really very appealing in an economic sense to a lot of people. What are you going to do? Buy a bunch of surveillance products from China? That's their leading edge of AI in lots of respects, and that's not a great thing.
(19:04):
So while we do need to worry that China is mass-producing surveillance products, we don't need to worry in the sense that those companies and products will out-compete American offerings. We're much better off having an open ecosystem in America that produces things as a result of debate and with some regulatory constraints around it, because those are things that most people around the world will actually be much more interested in buying than Chinese-made surveillance systems.
Laura Ambrosio (19:31):
So I want to go back to Steven's point about how AI and data privacy are really intertwined. Much of the tech industry, including CTA, has advocated for a national privacy law to get away from that patchwork of state laws and provide more consistency and clarity for businesses, but also consumers. What are you hearing in your reporting? Do you think we're going to see a national privacy law anytime soon? What's it going to take?
Steven Overly (19:58):
The fact that you could ask us that question pretty much any year of the last two decades or so suggests to me that this might not be the year it happens. But what's interesting, and I was just having a conversation about this on my podcast: if you look at the state privacy laws that are passing, a lot of them are very similar. So instead of really a patchwork of state laws, you're actually starting to get a bit more of a de facto national standard emerging. They're not identical, but there are a lot of similarities. And so I think one interesting trend to watch will be, as more states adopt similar data privacy laws, whether they set a de facto national standard. I don't think that will make federal data privacy legislation irrelevant, but perhaps it eases the passage a bit if the federal law mirrors what's happening in the states. That's the dynamic to watch, in my view, more so than any extreme movement on Capitol Hill.
Oma Seddiq (21:02):
States tend to move faster than Congress does. We're already seeing that with AI as well, as states like California, Connecticut, and Michigan start to introduce committees to review AI or are already passing AI-related legislation. On the privacy front, there is definitely a bipartisan appetite to set a national data privacy standard; what that may look like is where there's disagreement. But I do believe that with the onset of AI, and the host of privacy concerns intertwined with it, there's a lot of thinking now about how to approach AI. Well, maybe we start with a national privacy standard as the baseline foundation of AI regulation. And I think that idea is getting a little bit of momentum. There are some criticisms of it out there as well, but more and more you're seeing it crop up in conversations on Capitol Hill.
(22:08):
So whether or not they'll be able to do that is, again, up in the air. But the fact that they're approaching AI through this lens, not leaving the privacy discussion behind, and bringing it all together could prove fruitful.
Laura Ambrosio (22:25):
And we can't talk about privacy without also talking about cybersecurity. More devices becoming connected to the internet is great for our everyday lives, but it also opens the door for bad actors. So CTA has worked with the government to create the US Cyber Trust Mark, which is helping consumers make more informed decisions about the products they purchase. What other measures or public-private partnerships are you seeing that can help protect consumers?
Ryan Heath (22:54):
The biggest problem is the skills gap around cybersecurity. Well, there are two things. The problem often isn't the tech, it's us. We're the back door that messes up a lot of things, so we have to take personal responsibility and be aware of how to take that responsibility. But there's also just a massive skills shortage; government in particular struggles to keep up here, which is not great from a national security perspective. So I'm all for the Trust Mark, but then we have to do the underlying work as well, because there is a pipeline that feeds our ability to have cybersecurity, and you don't fix that overnight.
Steven Overly (23:33):
Yeah. I think one really interesting and probably pretty consequential trend is the rise in red teaming and sort of these collaborations between industry and government where they will do coordinated and deliberate attacks on AI models, on core infrastructure, on space stations, and essentially mimic the bad actors and try to identify vulnerabilities. I mean, to some extent that has always existed, but I think now it's really been normalized and I think you've seen the federal government embrace it quite a bit, which is pretty significant. And for instance, when the White House announced an agreement late last year with AI companies, red teaming was built in there and protocols for securing their AI models were in there. I think that's really important. And I think furthering that trend where the government and industry work together to address vulnerabilities, especially in these technologies that we all interact with, will be really important.
Laura Ambrosio (24:38):
Anything else? So, Oma, you mentioned attending the FCC commissioners' discussion this morning. With all of these devices becoming more connected to the internet, from smartphones to smart homes, what are you hearing about the FCC's priorities for the coming year, and what else can be done to help bridge the digital divide?
Oma Seddiq (24:58):
Well, I think number one is more funding for the Affordable Connectivity Program. We're seeing Congress get letters every day about that, and lawmakers themselves are calling for more funding to be put toward that program. So I think that's a huge priority of the FCC's this year. Also spectrum, as I mentioned, because right now they're pretty much powerless to auction off new bits of the invisible radio airwaves that fuel all our wireless communications. Getting Congress to prioritize those two things is a bit tricky considering the amount of work they have to do, and it is an election year, so a lot of them won't be there for parts of the year. But the FCC is still moving forward with its own proposed rulemaking in a variety of other areas this year, so they have their hands full in the meantime.
Ryan Heath (26:01):
On the spectrum point, I think a key around the world to unlocking spectrum for this proliferation of devices is negotiation with the military, who are notoriously unwilling to say that any of the spectrum they were handed decades ago could possibly ever be used for any of these devices, even though their own reasons for needing that spectrum are potentially very occasional or emergency-based. So I think there are ways to make sure the military can have access to that spectrum when they need it, either for tests or for conflict-driven reasons, without stopping the rest of us from having devices that work. That takes more than the FCC to solve; it also takes a willing partner in the Pentagon and probably the White House as well. So I would keep an eye on the White House trying to make a bit more of that discussion happen.
Laura Ambrosio (26:53):
So before we wrap up, give me your quick projections for 2024. What do you expect to see either at Congress, the administration, or at the state level this year?
Steven Overly (27:04):
I expect to see a lot of state-level action on tech. We saw an increase in the number of state tech bills that passed in 2023, and I think that's likely to rise further in 2024, especially as more states grapple with AI. So if you're talking about US tech policy, state capitals are really where the action will be.
Oma Seddiq (27:25):
I think AI is going to continue to dominate the conversation on Capitol Hill. I think 2023 was their introduction to it, and there was a lot of this exploratory phase, educational phase. I think this year will test whether any of that exploration and education will actually translate into any concrete legislative action.
Ryan Heath (27:48):
And I'd keep an eye on the elections. I think that there are big risks, but that because we're all talking about it, probably those risks are going to be fairly well managed. You'll see a couple of explosions, but I think generally speaking, it will be under control.
Laura Ambrosio (28:03):
Well, I want to thank our fantastic panel. We had a wide-ranging discussion today about different tech policy issues, so let's give them a round of applause.
James Kotecki (28:16):
Well, I hope those media insights from CES 2024 give you some ideas of your own. That's our show for now, but there's always more tech to talk about. So hit that YouTube subscribe button. Leave a comment, follow us on Spotify, Apple Podcasts, iHeartMedia, wherever you're getting the show. Get more CES at CES.tech. That's C-E-S.T-E-C-H. I'm James Kotecki, talking tech on CES Tech Talk.