Doug Johnson 

This is the Global Race for Leadership in AI panel, the next session in our Innovation Policy track. I'm Doug Johnson, Vice President of Technology Policy for the Consumer Technology Association. I'll be the artificially intelligent moderator for this session; the emphasis is on artificial, of course. We're going to start with brief remarks from Darrell Issa, who essentially needs no introduction at this trade show. He is Consumer Technology Association chairman emeritus, he holds 37 patents, he's been to almost 40 CES shows, closing in on 40 years, and he is a once and hopefully future member of Congress. Ladies and gentlemen, please join me in welcoming Darrell Issa.

 
Darrell Issa 

Can everyone hear me in the back? This is an amazing crowd. I think this is about the size of the entire Consumer Electronics Show when I joined it. Thank you all for being here. As Doug said, this is my 38th winter show; I had a few of the summer shows in Chicago. Thirty-eight years ago, if somebody had said artificial intelligence, they'd have been thinking about a congressman's speech, not about the idea that machines would, in fact, be replacing the speed of innovation with the speed of innovation by the very machine that was being programmed to innovate. Thirty-eight years ago, we would have been talking about how long it takes in COBOL, or Fortran, to write code to do a job. For all of you who are younger than 38 CES shows, we are no longer talking about that. We're talking about the existential threat to employment, and to the fabric of what we think of as national security, because of this great innovation. And so the panel today is really charged with both the good part of AI, which you'll see throughout this show, speeding up every year, the idea that more and more is done with less and less human interaction, whether it's simply asking Siri to get you to a location, or trying, in fact, to have something predict what you will want, when you will want it and where you want it, or navigating the world on behalf of transportation. We see some of the benefits of AI. But for just a moment, I want to tee up today's real discussion in this way. On the commercial side, whoever owns AI will own the industrial revolution that dramatically reduces the number of humans necessary to do more and more jobs, but they also will control the number of people who are hired to, in fact, be part of this innovation. So it's not a question of how many jobs you'll lose if AI succeeds in a country; it's how many jobs you'll gain because you're leading in AI. That's purely economic. Let's go to the other side. No question at all, whoever leads in AI will lead in the weapon systems that will matter for the next, however long this planet survives. Those weapons systems, the weaponization of artificial intelligence, have to be understood. Peaceful-use desires have to win out. The defenses have to be as strong as the offenses. So just as our grandparents and our parents saw new innovations, including the nuclear age, as a threat to society, and we found ways to balance them to ensure that our planet did not destroy itself, AI enjoys that same challenge. Now, as a Californian, I happen to be friends with the Terminator. And we can all look at that show, all 12 versions of it at this point, and we can say, oh, it's science fiction. Ladies and gentlemen, what you're talking about here today is not science fiction. It's not the future. It's the present. And the present is where I'm glad to see this is a sellout crowd, because it needs to be, because this present will determine our future. Thank you all for being here.

 
Doug Johnson 

Welcome again, everyone. We'll begin our panel discussion at this point. In terms of the format here, I'll do some brief introductions of our panelists, and then we'll go right into the questions for our panel. We'll reserve the last five minutes or so of this session for some questions from the audience as well. But welcome, panel, and thank you again for joining us here. This is a great follow-on, of course, to Darrell Issa's remarks a moment ago in terms of the themes and topics we want to cover. But let me introduce our panel, starting at the end there with Mattia Fantinati, a member of parliament from the Republic of Italy. To his right is Addy Cooke, who's the North American AI Policy Lead for Accenture. To her right is Michael Beckerman, who's president and CEO of the Internet Association. To his right is Lynne Parker, Deputy Chief Technology Officer in the White House Office of Science and Technology Policy. And to my left is Svetlana Matt, who's a legislative assistant in the office of Congressman Jerry McNerney of California. So welcome again. Let's start out and give each of you just a minute to expand upon your background and relationship to the topic of AI, and then we'll get into the questions. So, Svetlana, starting with you.

 
Svetlana Matt 

Thank you so much for having me here today. My name is Svetlana and I work for Congressman Jerry McNerney, who represents California's ninth congressional district. I handle the technology portfolio for the Congressman, and that includes AI policy. The Congressman co-chairs the bipartisan AI Caucus in the House, he sits on the Energy and Commerce Committee, and as part of that on the Consumer Protection Subcommittee and the Communications and Technology Subcommittee, and he also sits on the House Science, Space, and Technology Committee. In our district, we have the city of Stockton, which just a couple of months ago launched an AI strategy that we're really excited about; to our knowledge, it's the first US city to have such a strategy. So thank you again for having me here today, and I'm really excited to be part of the conversation.

 
Lynne Parker 

Hello, Lynne Parker, Deputy US CTO at the White House. My role, as it's relevant to this panel, is that I lead the White House's efforts to promote our nation's leadership in AI. And this is very much aimed at the policies that we need to make at the nationwide level in order to make sure that as a nation we continue to lead in artificial intelligence. It covers a broad spectrum, from research and development to innovation barriers and regulatory barriers, to education and workforce, to national security matters, to international matters. So these are the topics that I focus on primarily. I come from academia; most recently, before I came to this position, I was Interim Dean of Engineering at the University of Tennessee. I previously worked at Oak Ridge National Laboratory, and I spent a couple of years at the National Science Foundation, directing all of its investments in artificial intelligence. And I have a PhD from MIT in artificial intelligence. So I've been working in this field my entire career, and it's surreal, frankly, when you look back. Darrell was speaking about 38 years ago; I don't go quite that far back. But thinking from the early days of my career about how AI is impacting the world today, I never imagined it back then.

 
Michael Beckerman 

Thanks for having me. It was great to hear Congressman Issa tee up the panel, and he was spot on. And I feel a little bit more encouraged knowing that he'll hopefully be heading back to Congress; we'd be better off with people like Congressman Issa in Congress as it relates to technology policy. So I think that's a positive. You know, here at CES, it's interesting how it's changed over the last few years: you can't walk two feet on the floor without overhearing a conversation where someone's talking about AI. And if you look at all the displays and the products, almost all of them have an artificial intelligence component or have come about or are tied to AI in some way. Even, seemingly, the iPhone cases, you know, all come with AI now. And so there's a lot of positives that AI is bringing to our lives today and in the future. But it's important that there's coordination between the private sector and the public sector, and a recognition of the differences between different technologies. AI being used to curate your playlist on a music app is going to be very different from how it's used for navigating on Waze or Google Maps, and very different from how we're using it for self-driving and autonomous vehicles. And then also, on the other side, you have the government uses, which the Congressman brought up as well, and that needs to have a whole different set of rules. It's going to be very different when you're looking at military applications, when you're looking at applications for law enforcement. And we need to make sure that we have the checks and balances and the policies in place to ensure both that US companies and government can be the leaders in the world, but also that protections are in place for people, and transparency, and all the ethics that I'm sure we'll talk about in this panel.

 
Adelina Cooke 

Yes, I'm Addy Cooke with Accenture, and I couldn't agree with Michael more. Accenture definitely believes that there is a very clear role for the private sector to play in being part of the stakeholder conversations that the White House is coordinating and that Congress continues to coordinate. I'm very lucky to work with both Lynne and Svetlana on these really important issues. At Accenture, one of the things that we're seeing, kind of a theme for us this year, is the scaling of AI. Like Congressman Issa said earlier, AI is not the future, it's the present, and when we talk to clients across the globe, and certainly when we did a recent global survey of 1,500 C-suite executives, the challenge that most executives say they're dealing with is how to scale AI. And to scale AI, they really talk about how they need the right data foundation, they need the right multi-stakeholder teams within their company participating in the process, and they need to make sure that they're doing it responsibly. And I think this is why Accenture feels so strongly about being part of the stakeholder conversations with policymakers. Because not only is it an important challenge that we're facing, that our clients are facing, but it's one that I know, as the public sector thinks about how to scale AI to serve citizens better, is going to be crucial for them too, and it's all the same things that are going to be the fundamental building blocks. We're just really happy to be part of the conversation as it picks up even further steam given the White House's announcement yesterday.

 
Mattia Fantinati 

Thank you very much for your invitation. I'm Mattia Fantinati, a management engineer. I serve in the house of representatives; I'm a member of the Italian Parliament, and I sit on the Committee for Industry, Research and Development, Trade, and Tourism. My areas of interest are the digital revolution in Italy and e-government. In the last term, I joined the first cabinet of the current Prime Minister as Undersecretary of State for Public Administration. So I used to be Undersecretary of State; I am only a member of Parliament now, but I've recently been appointed special advisor to the new minister for digital and innovation of Italy. That ministry was set up three months ago, so it is quite new. And last year I promoted what I think is important research about AI, about the impact of AI on the labor market over ten years. Because, you know, I'm a politician: I write down laws today, and in fact a law's effects come out in ten years, not today. I think that's quite important. I belong to the Five Star Movement, my party, a majority party that was born on the internet and keeps operating on the internet. So we try to stimulate and encourage digital democracy and fight corruption. Thank you.

 
Doug Johnson 

Thank you. Thank you, panel, terrific start. Let's get into the questions. I've got one for the entire panel, and you can respond in any order, really. The phrase in the panel title is leadership in AI, so I'd like to ask: what does AI leadership mean to you? Is there such a thing as winning the global race for AI? Is it a zero-sum game? Who would like to start by commenting on that question?

 
Lynne Parker 

So I'm happy to start. I think if you think about this at the national level, when you look at leadership in artificial intelligence, you look at things like: where are the emerging companies coming from, where are the top companies that are leading in innovation? Where are the leading universities that are coming up with these cutting-edge ideas that are now leading to even the next generation of new ideas and concepts down the road? Where is the innovation ecosystem strong, where we can work very closely with academia and industry in the private sector to foster this innovation? And if you look at the impacts of being able to use these technologies as well, I think all of those things are reflective of nations leading in artificial intelligence. And we would argue that the United States is leading according to those metrics. But I would not agree with the idea that it's a zero-sum game, that one nation is leading, that it's a race where someday you declare victory and now everybody else is a loser. It's not a zero-sum game. I think everyone can benefit from the advantages of artificial intelligence. Of course, we need to be careful about how we use these technologies, and certainly the panel will discuss some of these topics today.

 
Doug Johnson 

Thank you. Anyone else want to respond?

 
Michael Beckerman 

Sure. I agree with that. It's definitely not a zero-sum game. You know, surely there will be winners, but maybe you'll be first place, second place, third place, fourth place, and not, you know, winner take all where everybody else is a loser. In terms of leadership, from the companies' standpoint, certainly that will come from the innovation and the ability to put in transparency and safeguards to ensure there's not bias or discrimination through artificial intelligence, and ensuring that ethics are set up in a way that meets the common goals and standards we'd want to have for this. But certainly the transparency piece is important. And on the government's standpoint, making sure that policies are in place that both encourage and allow for innovation, particularly here in the United States with the rule of law that we do have, but also ensure that there are safeguards for government use. It doesn't mean you should have blinders on and only try to have rules and regulations on the private sector while letting governments kind of run rampant with what they're doing, because if you really look at what the risks are, as folks have pointed out, some of the higher risks are government uses of artificial intelligence rather than uses by companies.

 
Adelina Cooke 

And I would just say, for both private-sector and public-sector use, you can't truly win or succeed with your AI deployment, with scaling AI, unless you have consumer and citizen trust. And that's really what we're talking about here; any race is going to need to engender trust among the population. And so when you're thinking about scaling, it's not just feasibility and innovation, it's making sure that you have the proper governance and responsible oversight within an organization, not limited just to the engineering teams. It's got to be your HR team, it's got to be your legal team; there's a bunch of stakeholders that need to be part of it. And that's a little bit different than other technology trends that we've seen in the past. I would also echo what everyone else has said, that it's not a zero-sum game. And I think, more broadly speaking, leadership is thinking about how we really maximize opportunities and help mitigate the risks at the same time. And from the Hill standpoint, you know, a lot of times folks come in to meet with staffers and with members, and you hear from them: we need to win the race, we need to win the AI race. But nobody really is talking as much about what that actually means and how we actually measure it. So I was really happy for Lynne to give some examples of benchmarks and indices that help demonstrate that, and I think when we talk about winning the race against China or other countries, we really need to start actually talking more in terms of examples of what that means.

 
Doug Johnson 

Michael, let me turn back to... Mattia, I'm sorry, sorry, go ahead.
 
Mattia Fantinati 

Well, my point of view goes in the same direction as my colleagues', as already said, but I really think the point is that this is not a global race or a zero-sum game. I'm a politician, and rather than talk about a winning strategy, I think that AI can help every one of us. I am pretty sure that we have to create a sort of AI that is trustful for the people, because we think that people must in some way be comfortable using AI, and so we have to ensure a sort of protection for people who are using AI. I am the only European here on the panel, and I would like to describe very quickly the European approach, which I think is important even for North America. The European Commission decided that action was needed, so it set up a strategy based on three important pillars. The first is to boost the EU's technological and industrial capacity and the uptake of AI across the economy, with private and public partnership. The second is to get prepared for the economic changes; this is quite important because we don't have so much time. And the third is to ensure a legal and ethical framework based on the values and principles written in the Charter of Fundamental Rights, which means ethical principles: ethics by design and ethics by default.

 
Doug Johnson 

Thank you very much. Michael, let me turn back to you as the industry association representative here. Can you turn to your membership and tell us their general perspective on AI? And then, following the trust point, by extension, what do consumers have to gain or benefit ultimately, as we know it now?

 
Michael Beckerman 

Sure. Well, trust in all the products is key, particularly now, as people can choose what they're going to use, and you see all the different products and everybody's competing for users. So trust is paramount, and that's not something that you can do on the back end; it needs to be built in on the front end. And on the question for consumers, you know, there are a lot of different things. There will be things that are simple, like, as I mentioned, a better playlist on Spotify; listening to music, that's very beneficial, that's an easy use. Commuting to work and having the AI determine the best route to take in traffic, I mean, that's highly beneficial. But also, frankly, not to overstate it, there are certainly going to be life-changing and life-saving uses that are happening even today, and certainly more and more into the future, because of artificial intelligence. Everything from health and medical uses, aiding doctors in diagnosing cancers and other illnesses, being able to use artificial intelligence in ways that will literally save lives, which is great. And, you know, as you see all the autonomous vehicles and things like that, again, that's another application that certainly will save time and make our lives more efficient and easier and all that, but also fewer people will die in auto accidents because of autonomous vehicles and the way artificial intelligence is going to help enable that. And so the scale of the benefits runs from the nice-to-haves in your day-to-day to things that literally will save, probably, some people in this room and friends and family of all of us in the future. And making sure we get that right is really important. Thank you.

 
Doug Johnson 

Lynne, I want to ask you what the administration is doing on AI, and there's even more to that answer since Monday, right? Please tell us.

 
Lynne Parker 

That's right. So many of you, I'm sure, saw Michael Kratsios earlier talk about how on Monday we released guidelines for AI regulation. And these are guidelines that will have legal teeth, in the sense that agencies that have regulatory oversight over the use of AI in their regulatory authorities will be required to follow these principles as they draft, or consider the need for, regulation. Now, if you read through these, they're available publicly; in fact, they're open now for public comment for 60 days, and we encourage everybody to respond to that. But if you look at them in depth, you'll see, just process-wise, because we've gotten a lot of questions about what this means, whether it's just another set of principles: it's not just another set of principles, in that agencies, as they consider AI regulation, will need to follow all of these things. The process in the United States is that any proposed regulation goes through OMB's Office of Information and Regulatory Affairs, and they're tasked with being kind of the last gatekeeper for any regulation that comes out of the United States. So effectively this memo is a memo to agencies saying: this is what we're looking for when you consider any kind of AI regulation. And so it talks about a number of things. It talks about the importance of trustworthy AI; it talks about the importance of not having regulatory overreach. As Michael was saying, many of the kinds of uses of AI don't require regulation, so we need to consider what's already in place, what needs to be addressed, and what doesn't need to be addressed from a regulatory perspective. It also encourages public engagement; the impact on our societies is real, and so we want to make sure that people have the opportunity to engage with the agencies as they're considering these principles. It goes into ten different principles, and it also talks about a number of non-regulatory approaches that agencies should consider, such as pilot projects or sandboxing, so that we can learn about these technologies. The bottom line is that these technologies are new in terms of their application and impact on society, and there are a lot of people who are concerned about a lot of use cases. But rather than jumping immediately to saying we're so afraid of it we don't want to use it, we need to be able to learn. And by having safe areas like sandboxing, regulatory sandboxing, where we can test out these ideas and learn what works and what doesn't work, then over time we can achieve those kinds of benefits that are useful for everyone. So I could spend a whole hour talking about it, but that's an overview.

 
Doug Johnson 

There's a lot there with those principles. And there will be an opportunity for public comment, you mentioned. I assume as soon as it's published, the 60-day clock starts.

 
Lynne Parker 

Yes. So right now it's been posted on OMB's website. If you haven't found it, Michael Kratsios has an op-ed in Bloomberg that gives you more of the high-level idea of what's behind the principles. It will very soon land in the Federal Register; I haven't had time to check today, but very soon it'll land in the Federal Register, and that's when the 60-day clock will begin. That will tell everyone how to go about submitting comments and so forth.

 
Doug Johnson 

Terrific, a good opportunity for public engagement. Mattia, I'd like to turn to you: intelligenza artificiale, right? So you've talked about your role as a politician in dealing with tech policy. How about dealing with AI in particular? What's top of mind as you're working with your colleagues in Parliament and as you're hearing from or interacting with your constituents?

 
Mattia Fantinati 

Yes, sure. My position, my role, is to play the human side of AI. I think this is important because I have to create consensus for artificial intelligence among the people, to make people comfortable using it. And also, I am trying to lead people to understand that artificial intelligence is an opportunity for their personal and professional lives, in order to create a sort of good feeling about using it. And I also think that it's very important for a politician to have a long view of the strategy, a long vision. Because otherwise, if you're a politician, even in good faith, and you try to change the law in order to regulate something that runs faster than the time you take to write down the law, what you really create is bureaucracy. So we have to fix the pillars and then build a strategy on them. In Italy, we have set up a sort of strategy that will come into force in three months, so it's more or less confirmed, and we are hoping for the trust of all the people in using AI. Our priorities are the public administration, ICT, healthcare, transportation, education and, of course, industry. This strategy is focused on small and medium enterprises; 99% of Italian companies are small and medium, so it's quite different from the US companies, and from China's as well. And so we try to attract a lot of skills from abroad, even from the US and North America, and from China, to help our companies develop AI knowledge, because we set up something like 100 million in the last budget law, and this kind of fund is open also to companies from abroad. So I think that's a challenge that we'll have to face.

 
Doug Johnson 

Thank you. Addy, I'd like to turn back to you for an understanding of how AI factors into the work that you and your firm do with clients in both the public and private sectors. You mentioned during the introductions that helping them understand how to scale this type of innovation is one thing, but can you expand upon the work that you're doing specific to AI in both those sectors?

 
Adelina Cooke 

Yeah, sure. So the part of the business that I work with closely at Accenture is called Applied Intelligence, and it's part of a broader strategy that Accenture has been pursuing over the last six to ten years, all in "the New," in technology integrations, and that's now 65% of our business. So it's a really important, critical component of what we're delivering to clients across industries. What's really key about this, though, is that AI is actually one of the alpha trends: it's growing faster than cloud, it's growing faster than a lot of the technology trends that we help companies keep up with. And that's because it's hitting so many different parts of the business. It's not just limited to the consumer tech that you see out here; there are a lot of B2B components, a lot of HR-management and supply-chain-management components, that AI is helping to make more efficient, delivering better for clients but also for the bottom line. And, again, back to my intro, when we're thinking about scaling, that's all the things that government wants, right? That's sort of what we're all trying to work towards: how do we deliver better services at a lower cost for citizens too? Accenture is spending a lot of time thinking about those things, but then also thinking about, okay, where's the human in this process? Within the public sector and the private sector, it's so key, because a lot of the decisions that are getting made are going to have direct effects not only on consumers, where they work and live, where they receive their government benefits, but also on the workforce component. So our Chief Technology Officer, Paul Daugherty, wrote a book called Human + Machine, which really gets at what new skills we're going to see more and more talked about in the future. Because AI isn't just a technology; there's a lot of human in the mix, in the development, in the deployment, in the oversight. And if we don't have the proper training throughout the lifecycle of AI, that's where we get into the dicier areas. So, yes, there are technical solutions, but I think what's really interesting about the OMB memo that Lynne has worked on for so many months is that it really does talk about the process too, right? It's not just about the technological solutions that NIST might be leading; there are a lot of other components in here. And again, that's why it's critical for companies like Accenture, who are working across industries and on the different kinds of applications for AI, to be involved in all of the different areas to make sure that we're doing this right.

 
Doug Johnson 

Right. It's more than technical; it's also some social consulting in a sense, right, giving them...

 
Adelina Cooke 

Strategy. I mean, right, there's a ton of strategy involved. It's not just technology.

 
Doug Johnson 

Michael, many of your members certainly are large companies. Mattia, you mentioned a moment ago small and medium-sized enterprises. From your perspective as a trade association, do you have a view on the large-company versus small-company perspective here, and the benefits, particularly to the small side of the industry?

 
Michael Beckerman 

Yeah. You know, particularly when you're looking at AI applications in the cloud and the ability for small companies to scale, the benefit for smaller companies is absolutely enormous. The data that goes into AI is very important, and how that's pulled together. But for smaller companies, you're seeing more and more the ability, through using cloud technologies, to scale on demand. And using artificial intelligence in doing that can make a small company look like and act like a larger company, and gives them the opportunity to become global and compete in the global marketplace. And that's really what we're seeing with a whole host of different applications here. So I think it's very encouraging when you look at it like that.

 
Doug Johnson 

And sticking with this general point, a similar question back to you, Mattia. You mentioned that most EU companies are small and medium-sized enterprises, so can you expand upon their role, or how European policymakers are seeing their role, with respect to AI?

 
Mattia Fantinati 

Yes. I've noticed that most developed countries have adopted a strategy about AI, and that in a way it reflects today's social and political system. That's so for Europe, and for Italy as well. For us, a small or medium company means manufacturing and handicraft, and so really my role is, in a way, to lead the handicraft sector to create a collaboration between the masters of handicraft and artificial intelligence. It's not so easy, but I think that's what we have to do, because the European strategy is focused on small and medium enterprises. And I used to be an entrepreneur; I had a small, a very small enterprise. For a small enterprise it is very difficult to find the resources; the funding to develop research and innovation is not easy. And also in Europe we have a sort of fragmented market, because we are 28, I'm sorry, 27 member states, and for this reason we're trying to unify, in order to create a single digital market where it is easy to access the knowledge and the business and to share our best practices. And if I consider private investment, Europe is left behind, just because the European market is composed of small and medium enterprises, so we have to make a lot of effort to invest in the EU. The European Commission wants to scale up the business and wants to reach a target of 20 billion of private and public investment every year over the next decade. Next year Europe will invest 1.5 billion of public investment; the European Commission wants to stimulate the business market, using public funding to leverage private investment. So that's more or less our strategy. And to conclude, Europe is facing this sort of challenge in two ways. The first way is opening up the way for public and private partnership, and the second way is teaming up for investment. The European Commission is making available a lot of funding for startups in the early stage, something like 100 million. So I think it's very important that small and medium enterprises try to foster innovation in a global market context.

 
Doug Johnson 

So we're talking about public, and I guess to a large extent also private-sector, investment in this area of emerging technology. But Lynne, let me return to you and maybe talk broadly about what should be, or what is, the role of the federal government in the United States in overseeing the use of AI in the private sector. I mean, there's clearly a theme or approach behind developing these principles to guide the market. Can you expand upon that kind of philosophical approach?

 
Lynne Parker 

Certainly. I think, at the beginning, the role of the federal government is to not get in the way. So certainly we want to allow innovation to flourish and to make sure that it's being used in ways that we can all benefit from, but as we've already pointed out, there are many areas in which we need to have more oversight. I think AI presents a kind of unique challenge, which is that we have lots of existing laws that protect Americans, as many nations do, from things like discrimination; it's illegal in the United States to discriminate. And so we all know that a company cannot willingly produce a product that discriminates, or there are consequences for that; we have a robust legal system to help enforce these rights and so forth. At the same time, I think there are certain uniquenesses with AI, in that we can't necessarily understand what's happening. So you can have a good intent for an AI tool to be used in a very productive, useful way, but if you don't know what's going on within the AI tool, or the AI system itself, you can't guarantee that it's behaving in the way that you would want. And so that then raises the question: whose role is it to make sure that we're protected from what people like to call black boxes? Well, certainly state and local governments can do this, and they are doing it in many cases, but at some point that hampers innovation, because now companies have to deal with a patchwork of different regulations in every locale. So at some point the federal government needs to step up and say, okay, we're actually hampering innovation by not having regulatory oversight, or a process for having some consistency. So that's what's so important, and what we're excited about with this memo that came out this week, is how that memo will establish consistency across all these different sectors that regulatory agencies have authority over, to make sure that as they consider particular use cases within their domains, if there's some application of an AI decision-making tool under the authority of one agency and a similar kind of tool under the authority of a different agency, at the federal level we have consistency across the board. That helps to protect the everyday person who's maybe on the receiving end of these tools, but it also helps the innovation ecosystem, because now companies have some predictability in terms of regulatory approach. And it also helps agencies. Frankly, we've touched a little bit here on how there are some workforce and expertise challenges, and the agencies don't necessarily have enough expertise either. So rather than having each agency struggle with what they should be thinking about as they consider oversight of certain AI tools, this provides a sort of roadmap for how to consider that. And that will help on so many different levels. So I think this is a good example of how it is helpful for the federal government to step up to provide this consistency and this predictability, and to help protect folks, with similar kinds of tools treated consistently across the board.

 
Doug Johnson 

Well, let's move from the executive branch back to the legislative branch and your boss, Svetlana, Congressman McNerney. What would you say are his policy priorities with respect to AI? And to what extent does a country's values shape technology policy, in particular with respect to AI?

 
Svetlana Matt 

Thank you for the question. Yeah, AI is a critical part of his work in Congress, both as co-chair of the AI Caucus and because of its importance to our district. He's also the only member of Congress with a PhD in math, so I'm incredibly lucky to work for a member who's so interested in this space. In terms of his priorities, one of the areas where he really wants to see the work of Congress, and more broadly the federal level, step up is workforce: both preparing workers for the new opportunities and jobs that will be created by AI, and also making sure that we are able to help workers whose jobs or skills might be displaced by AI and automation transition within the economy. Brookings, actually, about a year ago identified the Stockton-Lodi area, which is in our district, as the metropolitan area third most likely to be adversely impacted in terms of potential displacement. And so he really wants to make sure that we're helping communities like those in our district get ahead when it comes to workforce issues. Additionally, bias is an area where he really thinks we need to be setting some sort of guardrails and doing more, particularly looking at how we can increase diversity in the workforce, so the people whom the technology impacts are also at the table developing this technology, and also creating some sort of guidance or framework for, when AI systems and tools are being developed, how we should be evaluating the data sets that the systems are trained on, and what sort of testing should be happening before the systems are deployed and after the fact, since there's machine learning over time and it's changing. Thirdly, he'd really like to see robust R&D investment in AI. And I would also add that another area he's really been focused on in Congress is ensuring that the federal government is increasing adoption of AI throughout the government to increase efficiencies, but doing so in a responsible and smart way. He's the author of the AI in Government Act, a bipartisan and bicameral piece of legislation that was favorably reported out of both the House and Senate committees, so we're hoping that it continues to move forward. It would do just that: help increase AI adoption throughout the government in a smart and responsible way. So those are some of his priorities. And then to your second question, in terms of values, he's really eager to make sure that the US is leading in this space, because in many ways he believes that the values of the country that ends up in a leadership role at the global level when it comes to AI will also be embodied in the technology that is built and adopted throughout the world. And he always points to the example: do we want the technology of a country like the US, which respects civil liberties, or a country that might be more of a surveillance state, leading the way? So he sees that as a really important area.

 
Doug Johnson 

Thank you. Sticking with one element of what you just said, labor and workforce-related issues: Addy, let me turn back to you and ask what should be the roles of the public and private sectors with respect to ensuring that we have a robust pipeline of workers to do the R&D, as well as workers who can utilize AI in their jobs. So from both dimensions.

 
Adelina Cooke 

Yeah, so I think I kind of touched on this in my earlier comments about our human-plus-machine focus at Accenture. Within our own workforce at Accenture, we realize we cannot do the work we do for clients around the globe in every industry without a robust workforce that is constantly being newly skilled and reskilled, making sure that they have the capacity to continue to serve clients in areas of growth like artificial intelligence. So we invest about a billion dollars a year in keeping our workers up to date. And we also recognize that we have a lot of valuable lessons that we've learned from training our own workers that we can pass on to our public-sector clients and partners, and also to community partners, and so we are very active there. I know a big theme here at CES this week is apprenticeships, and Accenture is certainly part of that conversation. We're trying to reinvent how we think about scaling workers and bringing them in from community college programs and non-traditional colleges. Accenture truly believes, like many companies, I know we're not alone, that the four-year degree isn't going to be necessary for the future and for a lot of the technology integrations and innovations that we're seeing. So our apprenticeship program was actually born out of a community college in Chicago, and it's continued to grow many times over, over the past five years. We think that's the future, and we're expanding that apprenticeship program to other cities like Columbus and Atlanta, cities that, as Svetlana said, Brookings has identified as being at risk and having a lot of populations that are at risk. And we're trying to think: how do we get into community colleges and out of the four-year-degree mentality, and even beyond two-year colleges? In San Antonio, for instance, we're working with the local housing, sorry, the local workforce development boards; in St. Louis, we're working with the local workforce development board. So it's how do we get out of what we used to do and think about what's the future. And I know that Svetlana is thinking a lot about that, I know Lynne is thinking a lot about that, I know your companies are thinking about that, and we're trying to be part of the conversation and work with our technology partners at your companies and also with our public-sector counterparts.

 
Doug Johnson 

As Lynne alluded to earlier. Thank you, that's a great response, and it certainly resonates with the folks we work with and the initiatives that we have, as you mentioned, here at the show and at the association. But Lynne, you mentioned a long history with AI. Certainly it's been in development for a long time, but in many ways it's still a nascent technology. In the progression of emerging technologies such as AI, when in that progression is the right time to tackle some of these societal issues, bias, etc., that many of you have referenced so far? Is now the right time? Are we late? Are we too early in some respects? Maybe this is a general question, really, for all of you. Who'd like to take that first?

 
Lynne Parker 

Certainly now; it's not too late. I think at some point you have to be able to learn a little bit about the technology, to the point where you know what it can be used for. But now certainly is the time, and from the conversation here today, I think it's clear across the board that we are embracing the importance of how we develop AI and how we use it. I do think we have to be thoughtful about this. I think we have to be careful and not reactive about it. I think we have to do it based on scientific evidence that shows whether certain policies work or not. And that's why I think it's important, and it's a way that we can certainly collaborate internationally, in learning about policies that can help and that help us understand how to use these techniques and so forth. I do think that the collective efforts in all of these different directions will get us to a very good, positive place. I think it's great that people are digging in at all these different angles of how to approach the value system. It matters, it truly matters. It's actually written into the executive order that established the American AI Initiative, which the President signed last February. It's written in there that the importance of having leadership in AI, and why it matters, is so that we can develop AI that has the values that we have in this nation and in many parts of the world: the civil liberties, the privacy, the values of our nation. We want those to be part of the products that are being produced. And that's why the leadership that is the topic of this panel is so important. The leaders of the world who are developing these techniques can also, at the same time, develop them to have the values that we hold dear. So now's the time to do it, and I think we are doing it collectively across the world.

 
Doug Johnson 

Any other thoughts on the timing of it? Please go ahead.

 
Mattia Fantinati 

I think this is the moment; the sooner the better, as quickly as possible. Just an experience of mine: two years ago my party was in opposition, and we were getting prepared to win the next election. If you want to rule in the future, because that's what we are talking about, you have to know it, you have to design a scenario. That's why I promoted research on artificial intelligence, on the impact of artificial intelligence and its outcomes. I think it's really significant, because a lot of experts say that 30% of the jobs that we know today will expire in ten years, and ten years is tomorrow. And one out of two of the kids now attending primary school will do a job that today does not exist. So I think that's why Europe, not only Italy, the whole world, has to get prepared for this shift. And so I'm going to tell you what Italy, what Europe, is doing: we have to help all Europeans to improve their skills, both technological skills, which are very important because in Europe we have a huge digital divide, but also, and this is really important, the skills that are complementary to artificial intelligence, human skills like critical thinking, management, and creativity, which I think won't be replaced by AI. And the second point is to protect the workers whose jobs will be replaced, with social protection; in Italy last year we set up a sort of basic income, and also to invest in training and in education. Just very quickly, I used to be the Undersecretary for Public Administration, and my role was to try to digitalize the public administration of Italy. We had really two problems. One is that the average age of the civil servants is a bit high, it's 54, and it's very difficult if you are 54 years old to use innovation or digital skills. And the second is that our public administration is full of mature skills, and we need digital skills too in order to digitalize the public administration. So the turnover of employees is an opportunity: we are going to hire more than 400,000 people, all younger people with STEM skills, which I think is very important. We are looking for younger people, we're looking for STEM skills, and I think that in this way we can face the new challenges.

 
Doug Johnson 

Thank you. That somewhat relates to what you mentioned earlier, Svetlana, about the congressman's district and concerns at the local level. What kind of questions are you getting from your constituents in and outside of Stockton on this technology? And also, with respect to the citizenry, what about the opportunity for them to participate in the policy process? Obviously, we just talked about one proposed set of guidelines here to comment on, but let's talk about the constituents' questions as well as opportunities to have an impact.

 
Svetlana Matt 

So in terms of our district, we've been working really closely with the city of Stockton and the mayor's office and other stakeholders in the district as the strategy has been developed, and the congressman's really looking forward to helping them implement it and really wanting to make sure that the city is able to succeed in carrying this out. I would actually turn the second half of your question around, I would flip it. I think that when it comes to AI, a lot of people, I mean, probably if we just took a survey in this room, people aren't going to agree on what AI is. People in communities across the country certainly might not know what AI is, right? And they might not understand the impacts of AI until it's too late and they're felt. They might not even attribute those impacts to AI, but they might be more marginalized in society potentially. And so it's really critical that we reach out, that policymakers reach out to their constituents, to different communities and different populations around the country, and both help educate them in terms of what AI is and is not, but also really bring them to the table to help shape this policy conversation. I really strongly believe that it's more important now than ever that we hear from different voices as we're shaping this policy.

 
Doug Johnson 

Thank you. And that's actually a good note to finish on, because certainly we have an audience here, some of whom may be constituents or citizens, not only from the US but from around the world. So I'd like to give the audience here, as promised at the beginning, an opportunity for a couple of questions. There is a microphone that's been placed about halfway down the center aisle there. If you're interested in asking a question, please come to the microphone so we can hear you. I see we have somebody approaching right there. We have time for two, maybe three, depending on the questions. Yes, sir.

 
Speaker 

Thank you very much. So my question is to Addy, coming from the policy side of things. Can you briefly highlight what progress Accenture has made in terms of scaling and also ensuring trustworthy...

 

Doug Johnson 

Could you speak a little closer to the microphone? It's a little hard to hear.

 
Speaker 

Okay, sorry. So my question is to Addy, and what I said is, coming from the policy side of things, can you briefly highlight what progress Accenture has made in terms of scaling AI, and also ensuring trustworthy AI is properly taken care of? Thank you.

 
Adelina Cooke

It's a great question. At Accenture, I would say there are definitely two approaches, and I've mentioned this already. There's the strategy side: we are working with clients actively on ensuring that when they're looking at a new AI deployment, they're evaluating risk, similar to the structure that the memo from OMB lays out. You have to evaluate it for risk, and then you have to make sure that throughout the lifecycle, and at the beginning, before you even deploy the thing, you have the proper oversight and the proper considerations. You're evaluating risk, you're making sure that there are safeguards set up ahead of deployment or development, so that when you're ready for a deployment you can make sure that it's doing the thing you set out to do. When you identify the business case, it's not always clear that the oversight is there, and the goals that you set forth in the beginning need to be properly stated, and the proper teams need to be there to manage it. Now, the second piece is the technical piece. Accenture has invested a lot in different technical tools for model management; model management is going to be a big part of any technical evaluation and deployment, but so is evaluating disparate impact, right? As has been talked about on this panel, discrimination and bias are illegal in the United States, and a lot of the evaluation to make sure that you're not crossing those thresholds is going to be through evaluating disparate impact. And that can be measured, and Accenture has put a lot of resources into making sure that we can help clients learn to evaluate that risk.

 
Doug Johnson 

Is there another question?

 
Speaker 

Related to the topic of the global race for leadership in AI: there are many articles about how the Chinese government is progressing in AI development, and a Taiwanese scientist wrote a book, AI Superpowers: China, Silicon Valley, and the New World Order, in which he discusses how the US and China are in this AI race and how China is likely to be the winner, etc. So what are your opinions about this race for leadership in AI between the US and China?

 
Doug Johnson 

Great. I'm not sure how well we heard the entire question up here. It was more of a competitive positioning question, but...

 
Lynne Parker 

Certainly, and I heard you were referencing Kai-Fu Lee's book as well. I think one of the points that he brings up in that book is how China is very good at taking existing ideas, implementing them, and putting them into use, and certainly that is a strength of China. At the same time, I think we, as the free world, also care about exactly how these technologies are used. So we want to make sure that as we have the technologies, we don't use them in ways that are inconsistent with our values and our nations. And that's something that the memo I talked about today is aiming at: making sure that at the national level we are not only leading in innovation, but also using these innovations in ways that are consistent with what the people of the nation want. And we have the ability, I think, in the United States, with a very robust system of checks and balances, where if people feel that particular approaches are not fair, they can sue; there are lots of recourses, ways to make sure that as a nation we continue to create a climate that we all want to live in, one that's consistent with our values.

 
Doug Johnson 

As is always the case with these panels, we have more questions than we have time, so we have to end it here to keep you on time with your schedules. Would you please join me in thanking this great panel for their contribution.
