EPISODE 5

Law & AI Ft. Carole Piovesan

May 23, 2023

About this episode

Welcome back to episode 5. This week we are joined by Carole Piovesan. Carole is Managing Partner at INQ Law, focusing her practice on privacy, cyber incident response, data governance, and artificial intelligence (AI) risk management. She regularly counsels clients on a wide range of matters related to privacy, cyber readiness and breach response, data governance, ethical AI, and responsible innovation. On this week's episode, we discuss what she's hearing from clients around AI & law, use cases, points of concern, big opportunities long term, etc.

  • Google showing you AI-generated ads. How are internet ads going to change with AI?

  • AI Regulation discussion & how we predict it to unfold

  • Governance leaders & how they are leveraging stewards. Bringing people and processes together.

  • Fractional CDO/CAO/CTO - Caution against it or lean into it?

  • We also chat:

  • New York City cracking down on using AI to hire and recruit.

  • Guidance and regulations for AI

  • Bridging regulation gaps internally

  • What constitutes good data quality?

  • Country-wide regulations around research and development around models and AI

Transcript

[0:03] One, two, three, four. It was intended for the human to support the machine, the machine to support the human network. AI means something different to anybody you talk to, which is wild. This is AI or Die.

All right, so podcast episode four - how we doing? It's been a few weeks, there's been a lot that's been happening. I've been on the road, Reagan, you were on the road a bit as well. So just wanted to catch up. Brendan, you know, what's going on in your life? How you doing?

[0:27] Doing good. It is spring in Colorado slash early summer, so we are very excited out here. Gonna be doing some camping this weekend with the long weekend, so pretty excited for that. But yeah, overall things have been good, working on some interesting projects right now, and we're working on our product and getting that ready to push for the initial release. So just exciting times to be working on data and AI transformation.

[0:53] Yeah, the weather is good, it's springtime, things are growing and blooming - a lot of stuff's growing in our company and a lot of stuff's growing outside too, so that's good. I've personally been on the road a ton - I was in Cincinnati last week doing data ops and data storage and quality with a large banking organization. It was fun. I always love getting in front of people and talking about what they're working on in their projects as well. They're thinking through a ton of data mesh framework, which is fascinating - the methodology around data mesh, really how to structure much more product ownership for the folks who are building the pipelines. We can talk more about that, but it's good to be back home this week, and I agree it's beautiful here in Columbus as well.

Reagan, how are you? How's your week been?

[1:30] Good. It is actually episode five, not episode four.

Oh my gosh. Wow, we're having fun, guys. Yeah, it's good though. I'm in the office, it's nice to be home also. I feel like I've been in a time warp for the last two weeks. I was in Boston for the Open Data Science Conference, the East one, which was really fun. Lots of really good talks, lots of great workshops. Got to meet a lot of people who I talk to online in person, which is always super fun. So yeah, good times out there. And then came back to Columbus for the Ohio Tech Summit last week, which was also super fun. Did a bunch of in-person stuff last week too, so I just feel like it's been two full weeks of in-person craziness. It was good to get some heads-down time this week, and hopefully I can get out and enjoy some of the weather too.

[2:28] And you're being super humble, Reagan - you actually won an award. Reagan, what was the award that you won here in Columbus?

It was called the Trailblazer Award. It was awarded to 10 women in the tech industry in Ohio for their accomplishments. I'm super excited to be in the cohort that I'm in because a lot of women that I know personally or friends of mine who I've also worked with and partnered with in the industry over the years also were recipients. So yeah, honored.

[3:00] Congrats, so that's awesome. I'm really proud of you. And then you - you talked about your talk at ODSC. What was the topic there that you spoke about?

[3:07] Yeah, we covered building a capability roadmap, the different levels of maturity, and accelerating maturity across different capabilities in organizations. It was great because a lot of folks in the audience who are at the VP or management level really resonated with the message, which is how hard it is to demonstrate ROI from building up and improving and maturing some of these fundamental capabilities. It can feel like an iceberg where you can only see the tip of it, and trying to explain the levels of dependencies and complexity underneath to make it real and show business value is really hard to do. It takes a lot of skill and a lot of balance on that bottoms-up, top-down approach - being really hyper-focused on providing business value while thinking about the fundamental technical needs underneath, so that you can scale what you're building and not accrue a ton of technical debt. So I talked about that for about 40 minutes and it seemed to resonate pretty well. Yeah, it was a good talk.

[4:16] Yeah, that just reminds me of when I was on a transformation team at a prior org - as part of our core transformation, we made sure to partner with Finance from the start because we knew we were going to struggle to justify ROI around a lot of these innovation efforts, even if it was aspects of robotic process automation or toying with new aspects of AI. Having that core partnership with Finance - the CFO or somebody on their team - to build those out together, as opposed to trying to throw it over the fence and build a business case, I've just found so much more valuable and just faster.

[4:44] For sure. Well, let's get into it. I know there's a lot of news and trends happening, especially in the past couple weeks. We have a few articles that we pulled up that I think are important to talk through. For the folks who are listening in, we're really going to focus in on compliance and governance today, especially with our special guest Carole joining from INQ Law later as well. So before we get into that, let's get into a bit of the news.

Something that Brendan and I were talking about on prior episodes was how internet ads are going to change with a lot of these generative tools coming about. Something that just came out yesterday is that Google's going to work on showing you AI-generated ads: if you enter a query like "show me skin care for dry, sensitive skin," it will automatically take that query and then generate images using that same exact language to point ads back to you in line with your query. So again, using your own words to generate the ads that you're seeing to make it feel that much more targeted, which is super fascinating - and it's going to be in-line, instant, based on whatever you're searching.

[5:49] We all knew it was coming.

[5:55] Yeah, I think everything's gonna be more hyper-specific, everything's going to be even more tailored. That's what AI opens up for us - which can be good and of course can be bad, but I'm not surprised to see that. Something I'm noticing as I use ChatGPT, and as I look at Bard and some of the other LLMs out there, is that you kind of bypass all the ad stuff, right? I can go and get information, query across the entire internet, without seeing a single ad if I go through the ChatGPT interface. So I wonder how that's going to impact a lot of the blogosphere - I think that's a very dated term, but that whole concept of "I'm gonna put content on the internet, I'm gonna get eyeballs, people are going to pay for those eyeballs, I'm gonna make money off of that." I think that's going to be very different now. Recipes are the best example, right? There's always all this extra spiel about their life story in relationship to this recipe from their grandma, and then there are five ads I have to click through to get to the ingredients, and then I get to the directions. That all collapses into one query - I just say "how do I do this?" and boom, here's my answer. So I think we're going to see a very large shift in the commercial aspect of the internet, especially as these get more adoption. And then of course, I think all the ads that we do see are going to be even more hyper-specific, more impactful, or designed to get our attention and have us click through.

[7:09] And who knows, right? Is there a future where I'm going to be paying OpenAI to recommend content that I put out there when people are querying through it? If there's a large shift of people using OpenAI and ChatGPT as that central query point, then what is my way to get my content through there too, and almost get verified in a way through OpenAI so they use my content instead of somebody else's?

[7:34] Yeah, it's interesting because we're getting introduced to this "pay a subscription for this value-add," and we've always teetered on, you know, "you get nothing for free really, but they tell you it's free," and then you have to kind of live with all of the other elements of that experience. And so I think it's really interesting. I feel like a lot of consumers are a little fatigued by that experience, and maybe we'll see a shift to people being open to paying to avoid it. Obviously there are a lot of platforms who have done that before, like a lot of the streaming platforms where you can pay to not see ads, platforms like Spotify. I wonder if this will start to head in that direction as well in terms of search.

[8:07] I think so. Just again, who is the highest-paying bidder, or how do we essentially get people what they want to see there? And it's all driven by that.

And just related to the highest-paying bidder, I also see that Meta was fined by Ireland's data protection regulator for a record $1.3 billion for transferring European users' data over to the U.S. So this is historic - this is the largest a company has been fined under GDPR. The prior record was Amazon at $746 million. But essentially, these large fines for breaching GDPR are starting to get rolled out, and they'll probably go through layers and layers and months of litigation and all that. But essentially, is that the punishment for breaching the GDPR rules rolled out by the EU - fines? Can they do anything additional? Do you think companies will essentially want to listen? I think Meta being so large, and this being such a large amount that they're being fined, is them making an example of Meta for future companies who might think about breaching those rules and going around it too. So it's a bit of a "show everyone the punishment that you can go through - it's going to hurt you a ton from a cost standpoint" for a company as well.

[9:24] Yeah, and I know the theme of the day is compliance, so it's relevant. I think it's important for a lot of data and AI teams to realize just how much they need to plan for and mitigate this new risk around compliance from regulatory bodies and newly introduced fines. The risk is now more quantifiable, I would say, as we get more and more of these fines. Of course, this has been happening for some time, but especially as the fines increase in severity and impact, we have more of a compelling narrative on the data and AI side to say "this is why we really need to prioritize this up front. We need to design our systems ahead of time with this consideration," because the other feedback we're hearing from folks too is that if you have your plan together and you can talk to it wisely with your regulators and you know your stuff, they're going to look for fewer holes, right? They're going to trust you more to be able to regulate yourself, essentially. And so I think that's where teams need to be more and more proactive, and now they have more and more of a business case to go with that proactivity to say why we need to design and prioritize this up front as we're doing this innovation with data.

[10:24] Yeah, I think it's interesting because a lot of the discussions right now are how do we set some of these guard rails up so that we can actually execute against them? I think people are getting maybe a little fatigued from just talking about the fact that there need to be regulations in place. I think everybody's kind of of the same opinion for the most part that regulation is going to be a good thing and it's going to protect individuals from some of these systems that are being built and designed by a few companies, quite honestly, at this point.

I think one of the biggest challenges is obviously the execution of that, and I think there's still a lot of discussion around the paradigm to compare this new wave of AI regulation to. A lot of people have compared it to nuclear power, a lot of people have compared it to the internet - I know I've done that a couple of times. And so I think a lot of people are looking at what precedent has been set beforehand, what new rules we need that are not already covered by other governing bodies, and how we're going to make this an additive experience that we can actually execute against.

[11:42] You know, GDPR I think is a good paradigm to look at. I know it's a little different in nature, but I kept calling it "the GDPR for AI" for the last year because that was the best thing I could think of in terms of relevancy to folks I was talking to, and the kind of drastic operational changes it drove inside of companies. Once GDPR was out there, companies spent a lot of money and a lot of effort to get all of their procedures and processes in place and their risk teams up to speed. So I just think we're going to see that wave, and I know we'll talk about this later, but I think people are just waiting to understand what that looks like before pulling the trigger on a framework or on a specific approach, because they don't want to redo or overhaul any of that. So it's just really interesting to start to see the execution side come out of the woodwork, even though that was specifically a GDPR violation with Meta. We'll start to see more and more cases, and I think Carole will be able to talk to us about some of that too.

[12:48] Yeah, but for us Americans at a high level, it's all about the American Data Privacy and Protection Act, and there have been studies that have come out where companies are obviously prioritizing and wanting to stay in line with this. Of course, there's a mix of leaders and fast followers who are going to try to get ahead of this, but then the majority of companies are going to kind of wait and see in terms of what happens and be a little bit more reactionary. But even for those teams who are getting ahead of it and trying to think about how to assess their models and their model bias related to this, they're still estimating at least 40 hours of pure work just to document the process the model goes through and where the data is coming from, to be able to start to comply with these. So it's a huge time suck for these extremely expensive resources to document the process a model goes through, one at a time, let alone think about 50 models in production. So you'd have to document...
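As a rough illustration of what that per-model documentation can look like, here is a minimal, hypothetical sketch; the field names and values are invented for illustration and aren't drawn from any specific regulation, framework, or client mentioned in the episode.

```python
# Minimal, hypothetical sketch of a per-model documentation record that
# compliance-minded teams assemble. Field names and values are illustrative only.
model_record = {
    "model_name": "candidate_screening_v2",          # hypothetical model
    "business_purpose": "rank inbound applications for recruiter review",
    "owner": "HR analytics team",
    "training_data_sources": ["ATS exports 2019-2022", "job description corpus"],
    "known_limitations": ["sparse data for some job families"],
    "bias_assessment": {"last_run": "2023-04-30", "method": "impact ratios by group"},
    "human_oversight": "recruiter reviews every shortlist before contact",
}

# A record like this is roughly what the "40 hours of pure work" per model
# produces; across dozens of models, some automation clearly helps.
for field, value in model_record.items():
    print(f"{field}: {value}")
```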

[13:35] Yeah, there was a lot of debate on whether or not - obviously with the hearing that just happened in the U.S. - all of them were like, "Yeah, we want this industry to be regulated." But I think part of their strategy there is that they understand how hard it's going to be to regulate and how hard it's going to be to get the operations in place to execute against that regulation. So I wonder if it's like, "Hey, if we came in here and said 'don't regulate it,' we would look bad, so let's be on the good side of history and say 'regulate it,' knowing that it's going to take a long time to get what we need in place and be able to execute against it."

[14:08] Yeah, and I think buyers care a lot more this time around with AI. Buyers are very interested in the alignment and the regulation and the control - more like nuclear, right? Nobody wanted a nuclear plant in their backyard unless they knew it wasn't going to blow up. So I think it's almost a market demand now that they need to satiate: "we're going to build in controls." That's Anthropic's whole thing - Anthropic is similar to ChatGPT, but they are built entirely around this control problem focus. We're seeing it in the marketing materials for ChatGPT too. So I feel like they know they need to be upfront about that with this technology because of people's concerns at a global scale.

[14:41] It's interesting to see the different steps that different countries are taking. So for example, Australia is still seeking input on regulations, whereas Britain is being much more pragmatic - they're at the stage where they're planning regulations at this point. Same with China - they're a little bit farther ahead with planning regulations. You have wider bodies like the EU and the G7 Summit still trying to put together input, and then even the US is really still seeking input on regulations, reaching out to try to crowdsource and get input from each of the different companies and open source groups as well. And then countries like Italy and Spain are still investigating possible breaches as well. So it's so interesting just seeing the different maturities or the progress that multiple countries have made, and of course, with most companies operating across multiple countries, just unraveling that and trying to figure out what to do about it is going to take...

[15:49] Yeah, there's going to be a whole suite of tools and services that will quickly become prioritized. It's so interesting because Brendan and I talk about this a lot - we were building an ML Ops platform, a machine learning operations platform, back in like 2017, and we were focused on risk, you know? We were part of the team that was building that and taking it to market, and I honestly think it was one of the first tools in the market to solely focus on that. And everyone said it was too early. Then I go to ODSC West in November, and all the vendors are ML Ops platforms. And since then, one of those vendors got acquired. So there's a lot of movement happening now, where a lot of foundational technical groundwork has been laid over the last seven, eight years, even on explainability platforms. I just met somebody in Boston whose entire company does AI explainability, and they've got some really cool use cases in healthcare and with the Department of Defense. But it's going to get more and more traction, more budget, more focus, more prioritization, which is sometimes what we need, you know? I'm the last one to love a hype cycle, and I actually just tweeted that AI is going through its tech bro moment. But at least it gets people's attention, and there's a forcing function to actually prioritize some of these critical elements that we need to have in place. And I think that's a good thing, especially for what we're talking about here, which could have drastic societal implications and drastic economic implications and all sorts of things that we need to have a handle on.

[17:25] Yeah, I think it's interesting too because the regulations are, say, the stick, and then building trust within the organization is kind of the carrot. A lot of data and AI teams are trying to avoid, or are afraid of, these regulations or these breaches or these fines that are going to come, but they're also actively seeking - more positively - to build a relationship, build trust between the models and the users inside of the business. So you mentioned explainability, and that's actually an interesting area that we've been focusing on with one of the large enterprises we're working with. With the model, they're seeing it's not only technical explainability - "which features are creating this prediction?" and a lot of the more data science concepts that sit inside explainability - they're also just looking for the context of "where did this data come from? How was it transformed?" So really trying to open up the black box that is machine learning. I think that's going to be a growing trend as well: not only can I stand up to an audit from a regulator, the stick side, but also, on the carrot side, my users need to trust what we're doing in order to use it, therefore I need to be able to communicate effectively - "this is how we built the model, this is why we built it this way, this is what we've learned as we've built it" - to really line up with the business's understanding of the data and their expectations.
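For listeners who want to see what "which features are creating this prediction" can look like in practice, here is a minimal sketch using scikit-learn's permutation importance; the dataset and model are synthetic placeholders, not the client work discussed in the episode.

```python
# Minimal sketch: surface which features drive a model's predictions.
# The synthetic dataset and model are placeholders for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```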

[18:46] Yeah, I compare this a lot to test-taking where you're able to look up the answers - let's say you're able to cheat, you have all the answers, you can put them in and get an A. You didn't learn anything in the process. And with a lot of these models - they're predictive models, they're kind of black box in some cases - it can be hard to understand what logic got to that outcome, which features were the most important in concluding that prediction and why, and what we need to change operationally. You can do a lot with that information, even just on why the model is making certain decisions and why it isn't. And the outputs are great - we want the outputs, we want to automate stuff, that's great. That's step one, and I think a lot of teams will take that, especially if you can prove over time that those outputs are good. But I still think there is this deeper understanding of why the model is coming to the conclusions it's coming to, and in a lot of the work being done on explainability for these large language models, it's actually quite silly how some of them are reaching their conclusions. It's hard to determine how a model is getting to a conclusion, but in certain use cases, when they've been able to do some discovery research on it, it's not intuitive and it actually doesn't make any sense how it's getting there. The answer's right, but the way it's getting there is really weird. So I think that's another interesting concept as we start to ask what we care about with these models. How do we break this down? Is the output good enough? Do we care? Do we not care? And I do think there's going to be a bigger emphasis on explainability and transparency - not just, to your point, Brendan, trust and the stick, but also what can we learn? What patterns did the model learn that we can learn from, apply, and make changes based on? So I think that's a fun evolution we're going to see over the next couple of years.

[21:14] That was another big conversation at those U.S. hearings - the focus on the impact on workers, especially hiring and the HR teams that are leveraging AI tools. That's where you see even cities like New York rolling out compliance requirements for AI bias specific to HR teams. HR teams are obviously using large algorithms to scrape through resumes, to do job postings, and to understand who is coming to the company. But due to their lack of understanding of how the model actually works, a lot of companies are having them take a pause on the models they're using, because at that point the HR team themselves do not understand what is going on inside the model that's driving the outputs and ultimately making human decisions around who to hire, what to pay, and where to even post jobs. So there's a lot of pausing and really thinking through, as an HR function - folks who may be less close to understanding how AI works - having to start to learn that. We're hearing about a lot of that review.

You know, today with my product guy hat on, I would say it's a good time for us to be responsible with AI functionality as well, because it does open up so many wonderful possibilities, but I think we need to prioritize feasibility around the unexpected damage of things - I think bias is fairly straightforward, but there's even just LLMs giving wrong answers, like the hallucination problems we're seeing. So overall, I always want to bring that up: we should focus on using AI in the streams we know we can trust and validate. Like "write me a blog about this topic that I already know," because I can go through and validate the response I'm getting from the LLM, right? I've even heard enterprises say "we avoid all high-regulation use cases for AI because we don't even want to deal with it yet." So I think having that caution and that awareness is going to be very beneficial for people using AI today, because we don't want to create that fatigue of "oh, we can't trust this stuff." We've got to wait until it's ready for the problems we're teeing it up for. I think if we're cautious, we'll help ourselves in the long run.

[23:15] Yeah, and I think AI is looking very, very different for the enterprise than it historically has, and I'll explain that a little bit. As I mentioned, we've been helping companies - enterprises - for the last seven, eight years with machine learning, and what that looked like historically was: hire data scientists, or hire an external group to come in and build a model off of our data. So they're building these machine learning models, which are maybe a little bit more simple from a technique perspective - they might be using something not as complicated or convoluted as a neural net. And now we've seen this transformation of advancements in the transformer - no pun intended - the transformer technique, which has unlocked a lot of use cases around these large language models that make them more usable to the everyday individual from a productivity perspective, all things like that.

So you started going from these early adopters of AI from an enterprise perspective - "we will build it, we have data, we will build our own models, we will deploy them, we will manage them" - in kind of an enclosed ecosystem. Then we started to see this transformation happen in the market where people could build models that could actually be used by multiple companies just by feeding in their data specifically. So now you don't have to design and build and deploy and manage the model yourself, but you still have to curate and get the data ready to utilize and leverage that model, and then you have to do something with the outputs of that model. So now we're seeing this kind of shared responsibility of usage for a model - who owns the responsibility of the outputs being leveraged for X, Y, or Z? Who owns the responsibility of the improvements or transparency or auditability of those models? It kind of went from this build-versus-buy mentality, which I think will still continue to happen, and then you have these massive companies like NVIDIA and Microsoft building suites for the enterprise to leverage their own LLMs. You've got Mosaic, which is also doing that and helping companies build their own models. And so it's super fascinating to see this transformation happen from "hey, we're going to build these models internally, we have ML/AI platforms" versus "there's this external model we're interfacing with" versus "now the platform that we leverage will allow us to interact with certain layers of abstraction with these models and fine-tune them and make them custom to us." And I think no matter which camp you sit in, you have to care about the things we're talking about. It used to just be the companies doing it themselves that had to care. Now everyone has to care, or you're not going to leverage the technology. And so I think that's been such a fascinating shift that has happened in the last six months.

[25:54] And that's where I've been reaching out to a lot of governance leaders to understand just how they're approaching this at their org, and a lot of these are small but mighty teams - often one to three to five people total on the central governance team. And a lot of them, especially in the smaller to mid-sized companies, are really leveraging stewards - a federated steward network with a bit of both technical and business stewardship. So if we have HR teams who are leveraging models that they might have brought in from the outside, there is a business steward who lives in HR to really speak to the use case, the value, and the decisions we're ultimately making out of this. Then on the technical stewardship side, it's integrating with the IT teams, the folks right at the data warehouse level, who can typically speak to the quality of the data in terms of completeness, validity, and accuracy as well. And then these governance leaders are having to merge the business stewards and the technical stewards together to ultimately understand the use cases on the business side and then the stream of data and the quality of that data feeding into it - which I think is fascinating. And across the board, it's not like these are full-time stewards either. These are fractional data stewards. Very rarely in these average enterprises do we see full-time dedicated data stewards. It's usually a portion of their time - 20 to 40 percent - that they really have to monitor and stay on top of. So you have people applying a portion of their time across different domains and trying to come together as well. Bringing the people and the processes together is the bigger problem I'm hearing about from a lot of these governance leaders too.
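As a small, hypothetical sketch of the completeness and validity checks those technical stewards run at the warehouse level (the table, columns, and allowed values below are made up for illustration, and accuracy usually needs a trusted reference source to compare against):

```python
# Hypothetical sketch of steward-style data quality checks.
# The table, columns, and allowed values are illustrative, not a client's rules.
import pandas as pd

df = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "department": ["HR", "IT", "HR", None],
    "salary": [55000, 61000, -10, 72000],
})

# Completeness: share of non-null values per column.
completeness = df.notna().mean()

# Validity: values fall within an expected domain or range.
valid_department = df["department"].isin(["HR", "IT", "Finance"]).mean()
valid_salary = df["salary"].between(0, 1_000_000).mean()

print(completeness)
print(f"valid department rate: {valid_department:.2f}")
print(f"valid salary rate: {valid_salary:.2f}")
# Accuracy would typically mean reconciling against a system of record,
# e.g. comparing salary values to payroll data.
```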

[27:13] Yeah, I mean, just think about the journey you just mentioned on the data side. Let's rewind back to like 2011, 2012 - the big data movement, right? Everybody was obsessed with big data. They were trying to figure out how to architect the mechanisms to store and use and leverage it. And then we went through this big transformation of usage, which rewrote some of the requirements we had on the systems holding the data. And just now, like 11 years later, we're at this concept of data stewards and caring about data quality and the data governance component. I mean, we always cared about data quality, but the data governance element of that has really blossomed over the last five years. Chief data officers just started getting their own budgets - that's how slowly a lot of this stuff moved. And now we're seeing this kind of rampant momentum for utilizing and leveraging AI, which is great, and I think it'll speed things up a lot. But I just want to remind everyone what these things have looked like historically, and I'm not saying this is identical to that movement. I'm not even saying it's identical to the cloud migration transformation that companies underwent. But I'm starting to hear from a lot of people that it is different - and how much different, I think no one knows the answer to that yet. You've got to think about what these shifts do - they shift people's jobs completely, and that takes time. So I think it'll move - I think there are a lot of people motivated to make it move, and there's a lot of money to be made in this space too, which is why I think it'll move quickly. But it does take time.

[29:24] Yeah, and I'll say too - I don't have good data behind this, I just have what I've seen in clients - but AI seems to be moving faster than data did. Data seemed decades long, and AI - I know AI has been around, but I'm just talking about the number of use cases that pop up inside enterprises. Even a couple years ago I was asking, "Well, are enterprises using AI at all?" And now - I was on a call today where there were three or four new use cases we heard about, kind of word of mouth, inside a very large enterprise manufacturer that would probably have been further down on the maturity curve even a couple years ago. But now they're talking about a lot of ML use cases. So I just want to correct that mental model for folks when they say, "Oh, well, enterprises aren't really using ML," and I'm like, "Well, we were talking about getting a first model into production five, six, seven years ago, and now we're talking about a lot of models in production inside historically less mature companies." So it's moving pretty quick from what we're seeing.

[30:15] Yeah, we used to have to fish for use cases - like get clever on helping people understand how to identify them - and now it is speeding up quite a bit.

[30:21] Yeah, so that's where I really feel for these governance leaders, because they're dealing with obviously a huge uptick in use cases and how folks are leveraging data, but then also multiple compliance frameworks they're having to fit into, especially in hybrid on-prem and cloud environments, and making sure to align to those regulatory compliance requirements too. It's growing on both sides, so I really, really feel for them.

So that tees up another topic that I want to have a quick debate on, just five minutes or so. Something I've been hearing a lot about in the past six months or so is a lot more fractional CAOs, fractional CDOs, fractional CTOs. I think it's very interesting. It's essentially companies struggling to justify a full-time hire in that strategic sense, but still wanting that external insight from a CAO around "what are other companies doing?" Talking to people who are doing this today, it seems like it's gaining in popularity. I wanted to get your opinions on what you think the pros and cons of a fractional CAO or a fractional CDO may look like, and is it something companies should really lean into, or is it something we should caution against, given there are going to be inherent gaps that are missed when somebody's fractional, no matter what?

[31:44] Yeah, this is really interesting. Because I have heard personal stories from so many CDOs about their journey and their company in terms of influence and budget over the tenure they've been there, that whole thing is fascinating. If you go to any company in different industries and you look at the CDO and what they're responsible for and who they sit under and what they're focused on, it looks so different. There's not a lot of consistency across those roles. We were actually just having a great conversation about this a couple of weeks ago with a friend of mine - she mentors a ton of CDOs and she's been a CDO for a number of years, and she was just talking about this. We were like, "Okay, let's talk about the CDO function. Let's try to get some good consistent attributes about it," and she was like, "Good luck." You need different buckets of CDOs, different types of CDOs - the CDOs that live under IT and the CDOs that live under Finance. And so I think what's so interesting is that there is still this level of inconsistency, and the major thing they struggle with is buy-in and getting their leadership on their team, making decisions with them. And I just don't know how a fractional CDO is going to be able to get that much leverage, besides maybe being a quarterback on some of the tactical strategy for the company if they're willing to adopt it. But I think she had mentioned - Brendan, I don't know if you remember - the different roles a CDO goes through? I'll rattle off a couple, but one was like a traffic cop, one was like a policy maker, a lawmaker. Do you remember the other ones?

[32:37] I think it was like a bailiff - so somebody who polices, somebody who enforces - and then somebody who actually writes the policy was kind of the main one. And then the last one I think was something more transformational, an actual leader level - it was like a change maker or something like that. The point just being that there are a lot of roles you have to take on. One of the biggest challenges they have isn't necessarily, "Okay, we've got to do this data mesh transformation" - it's the cultural shift the company is going through, and unfortunately they're responsible for that because they're the most knowledgeable about how to make it tactically happen, and they know it's such a cross-functional role that there's a lot of buy-in that needs to happen and a lot of proving your worth that still needs to happen. So anyway, those are just my initial thoughts about a fractional CDO or CAO. I just think from a tactical perspective it could make sense for companies, but in terms of that cultural shift - getting it to be the fabric of a company and not just this tacked-on additional thing they're trying to leverage - that takes a lot of transformational work from within and a lot of cultural shift.

Hear it if you're leaders in that way.

[34:49] Yeah, and I think it would be hard to be a fractional CDO and do that cultural change, get that buy-in, and play the political pieces that need to happen to really get that budget and also get buy-in across the organization. I think it's also going to be interesting to see how that role evolves as data and AI and analytics become more and more paramount to the IP of a company, because it seems like it'll become more and more like a CIO or CTO-level role, where it is critical to the success of the operations of the business to have that level of leadership and representation. So I think it'll be really interesting to see how that plays out over the next couple of years here. I do see some perks to it: you inherently reduce your costs in the short term, and you get that immediate external insight as well - "what are other companies in our industry doing at this level?" - and then "how can we help educate the rest of the leadership team, the rest of the C-suite, around what's possible for our org, especially as we're starting to identify use cases early on?" I'm looking forward to hearing from people who have lived through multiple fractional CDO roles as this kind of emerges too. But it sounds like you both are on the side of not being for it - that it should be a full-time employee?

[35:42] Yeah, I mean, I don't know much about the success stories - maybe if I looked into it further I would change my mind - but as of right now I just don't see how, at least for a large enterprise organization. Maybe this is for smaller orgs that are already bought in, and for that I could see it, but for the large enterprise organizations, I don't see how that could work.

[35:59] Yeah, I agree. I don't think in the enterprises it would be as successful, but I also know there's a pretty short average tenure for a CDO role - I think it's like a year and a half or two, something like that, which for a valuable leadership role seems very short. So maybe a fractional CDO does help find the right candidates and things like that to pull into that full-time role, to make sure they're a good fit. But yeah, overall I think long game, that's not going to be the best solution for a lot of enterprises.

Makes sense.

[36:26] All right, well folks, we have Carole from INQ Law. She's a Managing Partner there and also co-founder of INQ Consulting. I've now known Carole for about a year, a little over a year now. In fact, we were having a lot of these conversations at the Data Connect Conference last year, touching on a lot of the regulation space too, which I think is just really funny - we were full-blown into that conversation then, and now look at everybody paying attention to it. So Carole, if you would just give us a little bit of background on yourself, and then we can jump into some of the questions we have.

[37:07] Absolutely. So thanks so much for having me. It's a real pleasure to be on your podcast. I am a lawyer by training. I have years of experience in litigation, but in the last seven or eight years I've transitioned to really focusing on the implications of artificial intelligence. And this was before we were looking at actual law - we were talking more about what the principles of responsible or trustworthy AI should be. Since then, as you fast forward into 2023, we're now almost overwhelmed by the different kinds of guidance and laws and regulatory proposals that are being made to put guard rails around high-risk AI. So lots to talk about, but that's generally my background. Thanks again.

[38:05] And it's very relevant, especially in this episode five where we've been talking a lot about the regulations coming out of different countries and even the hearings the US is having too. Could you give folks who are less familiar with the overall topic - specific to the US - a sense of some of the recent activities you've seen at the government level, from a regulation standpoint, and at the state level too, that we should keep in mind as we look ahead over the next few years?

[38:26] There's so much happening - and it really depends on what state you're in and what sector you're talking about. There's so much happening. There's really good guidance coming out from various regulators like the EEOC, which is providing guidance on the responsible use of artificial intelligence in the labor market process - whether it's hiring, recruiting, or promotion decisions - some really important guidance trying to limit discriminatory practices by algorithm. We see the same thing in insurance and in financial services. We see more and more chatter about this in the health care sector. And it really breaks down into what existing laws are in place that can be used to protect civil rights in the context of AI, and then what new laws need to be put in place.

So for example, at the New York City level, we've got Local Law 144 - it's a law now, an ordinance that requires a bias assessment for AI systems used in the hiring and recruiting process. That's striking because it's at the city level, but it also goes to show that there are distinct concerns around perpetuating discrimination by algorithm, which is a hot topic and has been a hot topic for many years, and it's an important topic to deal with.
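For context, the bias assessments these rules call for typically come down to arithmetic like the sketch below: selection rates per demographic group and impact ratios between them. The groups and counts here are invented for illustration and are not taken from any real audit.

```python
# Hypothetical sketch of a selection-rate / impact-ratio calculation of the
# kind bias audits of hiring tools often involve. Groups and counts are made up.
selected = {"group_a": 40, "group_b": 18}    # candidates the tool advanced
applicants = {"group_a": 100, "group_b": 90}

rates = {g: selected[g] / applicants[g] for g in selected}
best = max(rates.values())

# Impact ratio: each group's selection rate relative to the highest rate.
# Ratios well below 1.0 flag potential adverse impact worth investigating.
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```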

[39:40] So that's an example of a sectoral law, taking existing principles and trying to implement them in a more tech-focused way. You obviously have historic laws to do with housing, fair housing, and fair credit that are being applied in the algorithmic context. And then you have laws at the federal level proposing accountability for algorithms - most notably the Algorithmic Accountability Act, a proposed federal bill that is specific to this technology and suggests specific compliance measures for the use of high-risk AI systems as well. So you really see guidance coming out from regulators, you see industry-led guidance around trustworthy and responsible AI, and then you see new laws coming out that are specific to AI, trying to put in place appropriate guard rails for accountability.

[40:23] What a mess, though, right? Because I try to think about it: if New York City is coming out with specific regulations, companies are obviously hiring for folks within New York City but also outside of New York City. Just the thought of teams having to think about "okay, who are we hiring, where are we hiring them from, and then how do we align with regulatory compliance at the city, state, or federal level too?" - it just seems so daunting to me. I wonder if you're hearing something similar from other companies?

[40:41] Absolutely. I mean, not only that, but you also have a smattering of privacy laws coming out at the state level as well. So you have certain laws governing the nature of personal information, you have specific state-level laws that cover breach notification in the context of a data breach, and now you have fragmentation, or potential fragmentation, of laws related to the use of that data through artificial intelligence and the guardrails put in place for AI.

Not to mention, you also have a number of different processes that aren't yet anything formal, but they certainly indicate that there's more to come. The White House came out with its draft Bill of Rights on AI and most recently met with four of the top CEOs in AI. Most recently, actually - I think just today - it put out a request for input on how to establish more accountability in AI, and it has put out good guidance on the use of AI in the public sector. And then you see a number of different federal agencies - the National Telecommunications and Information Administration, for example - that have also come out seeking guidance from industry and other interested stakeholders on how to establish accountable AI.

So if you're sitting back as a business and you're looking forward thinking "how am I going to manage all of this stuff?" it's completely justifiable to feel overwhelmed.

[41:53] So do they come to you often and ask "what should we do?" What are starting places for them?

[41:58] I'm getting that question all the time: "How do we make sense of this landscape? What do we do? How do we comply?" And this is only the US we're talking about - leave alone Europe, which is putting forward its own prescriptive EU AI Act. So what do I tell them? I say start with "let's look first at what you have in place regarding data management and data governance." So let's take what you have, let's talk about what your data ambitions look like - "what are you hoping to use or leverage this data for?" - and then let's bridge the gap between what you already have in place as mechanisms to manage your data and what your ambitions are over the next - you know, it used to be three to five years, now I'm saying the next one to three years because things are moving so quickly.

It's really important to take what you already have in place and augment for where we see consistency in regulation. And I'll take one second to say where I see consistency. I always break it down into three buckets. The first is oversight over the data sets - here you're really thinking about privacy, you're thinking about data security, and then you're thinking about data quality. "Is this a properly constituted data set, and is the data you're relying upon at risk of in some way perpetuating bias or harm that is not defensible or justified?" So that's bucket one.

Bucket two is really looking at the composition, robustness, and safety of the model or models themselves. "Do you have appropriate oversight over those models to be able to validate that you have created professional, well-constructed, secure models for the intended purpose?"

And then third, you're looking at the outputs: "do you have controls in place to monitor those outputs for higher-risk AI systems?" And we can talk about how to determine high risk, but "do you have those controls in place that will help your organization ensure that the systems are being used as intended and that there aren't downstream harms caused by data drift or some other type of interference with the model?"

So that's typically where I go, and the advice I give my clients is "identify your data ambitions, know what you already have in place, and then let's work across those three buckets to augment your existing data governance and data management infrastructure."
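One simple, hypothetical way to operationalize the third bucket's output monitoring is a two-sample test comparing a feature's training-time distribution against recent live data; this is only a sketch of the drift-detection idea, with made-up data and thresholds, not the specific controls Carole recommends.

```python
# Sketch of "bucket three" drift monitoring: compare a training-time
# distribution with a recent live window using a two-sample KS test.
# Data, window sizes, and the significance threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)  # reference window
live_scores = rng.normal(loc=0.4, scale=1.0, size=1_000)      # recent window

stat, p_value = ks_2samp(training_scores, live_scores)
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic {stat:.3f}); trigger a review.")
else:
    print("No significant drift in this window.")
```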

[44:46] And that's where in that first bucket I'm tangibly starting to see teams do almost like a stamping where it's a silver, bronze, gold level of certification of "this data set has been checked for data quality, we're monitoring this accurately." I wonder if you're seeing something similar just on that first bucket - the data quality side?

[45:00] To an extent, yes. In fact, what we see is a lot of discussion about "well, what constitutes good quality?" There are standards out there, but we do get a lot of discussion, particularly when you're trying to assess your system for bias, about looking to those standards and benchmarks for what constitutes an appropriate data set and what constitutes an appropriate methodology for querying the data set to assess bias. We do have some good guidance out there; at the same time, there's still a lot that needs to be determined.

So, you know, more and more I see companies reaching out and trying to get more assistance with that assessment. Sometimes an external assessment is required by law, and that's what we see in the New York City context where they're specific about independent audits. And sometimes it's not, and so you've got the company really trying to figure out "do we have the internal capacity to assess our models and our data sets, or do we need that external assistance?"

[45:46] That's where - yeah, I was just going to say, there's so much conversation happening, which is great, and I think the first step is obviously education. But I think a lot of companies are maybe overwhelmed with the implementation aspect of this. Like, "okay, we can set our policies and standards all day long, we can say which frameworks we're going to leverage and utilize, but then we have to put it into motion to make sure it actually works."

What are you seeing in terms of companies thinking about that strategically? Have you seen them leverage external parties to help them do that and get it into motion? And then the second part of that question: what do you have to say to the companies that say they're not ready yet? Do you think this hesitancy around implementation is why they're saying they're not ready yet?

[46:26] So absolutely. On your first question, it's both, right? There are some companies that have really strong capabilities internally, or they have other feedback loops - whether it's through external consultations or stakeholder engagement - that give them a degree of comfort that they've done the appropriate oversight of not only their use case ideas but also the implementation of those ideas, where they feel like they're in good hands. So you see both across the board, and I think that makes really good sense. This is a maturing space, and right now it's still quite immature, and we've got companies that in some cases are more mature than the external assessors. So they're doing it on their own. That's number one.

Number two, on the notion of being overwhelmed and not really knowing where to start, and sort of feeling like "I can't do nothing and I can't do everything, because no matter what, it's just too much" - what we always advise is to start with that gap analysis. Don't worry about how mature you are as an organization when it comes to artificial intelligence or your data practices - just start with the inquiry: "Where are you? What do you have in place?" Almost every single organization has some good capabilities in place that you can build on and augment. So for me, the message to all of my clients is always "start with what you have, let's assess what you have, let's assess where you want to go, and then we can talk about the road map to bridge that gap. But don't get overwhelmed that you're not there yet, because most companies aren't."

[48:02] Yeah, I remember when we had this panel discussion last year, we talked about this idea of proving - being able to prove in these different scenarios - because some of these models are super black box and some of them are considered proprietary. And so they can put all of these levers around it to say "hey, yes, we are doing this responsibly; hey, yes, we are tracking for X, Y, or Z." Have you seen any adjustments since we talked last year in how companies are thinking about that, or how regulators or governing bodies are thinking about proving - getting to that point of execution and actually forcing them to prove it if there is litigation or if they are getting audited?

[48:39] Absolutely. I think the number one telltale sign that things are changing is the fact that you can hear more mature discussions within companies, across different departments, about this particular issue - "what are we documenting? How are we documenting it? How are we retaining it? For how long?" So it's a maturity in the conversation, which I align to the data literacy piece: I can see that different departments that don't typically get involved in these conversations to do with data analytics and the use of data for business objectives are now coming to the table, and they're coming with more maturity. They know they have good questions to ask, they understand the lingo, and there's a mature conversation about what's happening.

I see that also in the regulatory space, where we have a number of different regulators - or government agencies - that, even if they're not providing direct guidance, are asking good questions. And that's just as important, because a year ago we didn't get those kinds of questions from those agencies. So it's important that they're asking good questions and trying to make sense of the complexity as much as any business - "how do we harmonize standards, approaches, frameworks so that we don't hinder innovation but we also don't perpetuate harm?"

And that's a great point you make - it seems like regulators are catching up. Historically I've heard the adage of "we as companies, as business teams, struggle waiting around for regulators to catch up." It seemed like regulators were always lagging behind what was going on from an innovation standpoint inside a company, so teams were left to think through their own ethical structure they wanted to stand up in order to properly take care of this and manage it ahead of regulators coming out. To your point, do you see an increased pace from regulators - that they're keeping up and staying more in line with the problems the companies are working on?

[50:19] I do, I do. I think, though, to your point, the work that businesses and organizations have done to establish their own frameworks - all of that hard work is paying off, in that whoever has done that work is really shaping what the policy discussion looks like, and they're doing it from almost an evidence-based perspective. They have already started to operationalize systems that prompt discussions about ethics and appropriate monitoring. That's really important, because what the regulators are asking for is information about "how to," and the organizations who have already done this hard work are able to contribute exactly those points. They can tell you, "This is what we've done, or this is what we're doing. Here's what we think is working, or here's what we would encourage you to avoid." And that's really useful feedback.

In fact, if you look at the UK process: at the end of March, I believe, the UK government tabled its white paper standing up what they call a pro-innovation approach to artificial intelligence regulation. What they're proposing is to equip regulators with principles relevant to the sectors they regulate, and then work through various mechanisms to hone in on the extent to which these principles are being operationalized, and how they could potentially shape regulation down the line if that's deemed appropriate. So it's interesting that what's happening in the U.S. context is carrying over, in a way, to other jurisdictions that are adopting a very similar approach.

[51:54] What about the conversation happening around research and development? I've heard a lot of talk about wanting to regulate the industry on developing and designing more sophisticated models that can do more. What have you heard? I know we talk a lot about enterprise usage of AI and the building of models, but a lot of those models are pretty simplistic when you compare them to the models being developed by organizations like OpenAI. So what are your thoughts, or what are you hearing, in terms of regulation from a country-wide perspective on continuing the advancement of research and innovation in the model techniques that create more sophistication?

[52:18] It's very much along the same lines: there is a genuine interest in funding and enabling research and development, but doing so within a framework of responsibility. We can see this in some of the guidance that's come out of not only the White House but other federal agencies supporting the use of artificial intelligence in the public sector. The White House put out an announcement just last week or the week before reiterating huge investments in research and development partnerships. All of that, I think, is not only to support the actual advancement of this technology, but also to help inform where there are distinct risks and harms, and ultimately that will inform the policy discussion and the regulatory discussion as well.

In the research and development space, we saw the open letter signed by thousands of leaders in AI calling for a six-month moratorium on the development of sophisticated large language models, to effectively allow the policy landscape to catch up. That hasn't quite taken hold; instead we've had federal agencies and the White House come out and say, "We want to make sure that this innovation, this research, is happening, but we want to give clear guidance, or clearer guidance, on how to do so in a trustworthy manner."

The big issue is enforceability, obviously. With any type of self-regulatory or voluntary approach to a framework, there isn't the same level of enforceability. We have seen the FTC, for example, come out and say that if you're building algorithms that aren't using data appropriately, there's a risk they will shut it down - they call it "algorithmic disgorgement." So there's a distinct enforceability risk there for companies. There could be associated fines, but really the big issue in this case becomes one of enforceability.

[54:29] That's what I was just going to ask you, Carole - is the majority of the pain that companies feel from not following these regulatory compliance requirements basically fines? Is that the punishment for not following these regulatory structures that government bodies or city bodies are putting into law?

[54:43] Well, to an extent. We don't have a ton of fines associated with new laws related to AI. There may be fines associated with the sort of discriminatory practices caused by algorithms, as I said, under existing laws that are being applied in the AI context - but those laws would have been around well before AI became such a hot topic.

The other big deterrent, though, is certainly reputational harm. The use of a system that is perpetuating harm or discrimination, or that results in some form of harm - it could be physical, it could be psychological - we've seen examples of chatbots being used in ways that incentivized somebody to commit suicide. That's a very real, very negative harm, and it has reputational consequences for the company. There's also a chilling effect on further research and development of the technology itself. The more we see these negative use cases, the more we fuel public distrust in artificial intelligence, and the harder it will be to allocate dollars to this technology and to incentivize adoption, because it becomes stigmatized as something very negative. And that really isn't what it should be.

So there's a collective benefit to rolling with trustworthy AI and putting in place principles and practices that really try to mitigate harm and increase trust, so that we can use this technology for benefit and really try to minimize and address that public trust issue.

[56:24] Related to that, what are the common questions you're seeing regulators ask of companies? We talk about regulation in the abstract, but what are the real questions regulators are actually asking right now - the ones they're struggling to answer?

[56:34] Yeah, they're asking questions about the how. What types of accountability mechanisms should we put in place? What types of accountability mechanisms have you found to be effective, and why? What are other states, or other companies in your industry, doing that you think is helpful? What do you think is not helpful? They're really trying to get into the specifics of "what do we need to do to bridge the gap between our existing laws and our concerns with the use of artificial intelligence, and how do we do this in a way that allows our companies to continue to innovate without being harmful?"

So it really has to do with the accountability mechanisms and the how-to's: "What do we need to do to make our regulatory environment for artificial intelligence a little bit more harmonized and seamless?" Give us that advice.

[57:20] I have a larger, maybe opinion-based question for you, thinking about some of the general concerns starting to sprout up. Regulation could make it harder for companies to participate, so there's this general conversation that it's going to be a select few who can afford to participate - who can afford to put those guardrails up from a development perspective, or even a usage perspective. Have you seen any of those conversations start to happen? A lot of people are drawing parallels to medicine development or even nuclear energy. What are your thoughts on the monopoly that could form from over-regulating the ecosystem?

[57:54] There are definitely those conversations happening, and that's definitely raised as an argument against over-regulation. I actually think one of the biggest challenges smaller companies are going to face in trying to develop AI is accessing good quality data sets in order to train and then ultimately test their systems. That is a huge area when we think about the data monopolies of some of the big tech players. That's a significant area of discussion for sure.

Over-regulating - yes, it is much easier for larger companies to comply with regulation. At the same time, they do face more of the scrutiny. Smaller companies won't necessarily face the same kind of scrutiny, or as much of it. So it sort of plays both ways.

Where over-regulating becomes a distinct concern is where we apply regulation to the technology itself, regardless of a risk-based approach. If you're just saying, "once it fits the label of artificial intelligence, now it is subject to regulation," I don't think that's the point. The point is that we're trying to prevent specific harms from occurring. That's why the risk-based approach to AI is really important, and it's being adopted in all major jurisdictions that have AI-related laws - the U.S., the EU, and to a lesser extent Canada. We see that adoption of the risk-based approach because you don't want to regulate all aspects of the technology. It's not necessary.

So in that sense, I think that's an important factor when you think through what is being regulated and how you prevent over-regulation.

[59:43] Yeah, and I'll plug this for you - I know that INQ specifically has been focused on helping organizations understand the risk associated with different use cases and how they're approaching leveraging artificial intelligence internally. I asked that question because I think you all have a really unique perspective: you're talking to a lot of these organizations and trying to understand that risk-based approach. I think the industry at large even struggles with defining artificial intelligence consistently, which is kind of silly, but I think it's so interesting that you mentioned that, and I love that approach, because of course we want to regulate the high-risk issues. And I know a lot of similar fields, like nuclear power or medicine, have taken that approach as well.

As kind of a closing thought - I know you mentioned bias and hiring practices - are there other dimensions of risk that you've observed?

[1:00:30] When we look at harms - right now we're still in the process of defining what some of those key harms will be in the AI context, certainly when you think about it from an AI incident perspective. Having sat at those tables at the OECD, I can tell you that's still an area of discussion. But when we think about the harms we are trying to prevent, we think about psychological harms, physical harms, harms related to fundamental rights and freedoms, and harms related to property and the environment, which is another really important topic in the world of AI. When you're thinking about what constitutes high risk, you will see in a number of the different proposed or passed laws that a risk or impact assessment is required in order to determine whether or not a system should be subject to greater oversight - forget regulation for a minute, it could be just company-led oversight - but that becomes a really important threshold question.

So the first is "does the system start to feel like AI?" because I agree with you - having a definition, we still don't have a very good definition in place, although the law is starting to codify that a little bit. So is the threshold question one - "is does this - does this look and feel like AI?" and then number two is "well, what is the expected impact of this system? What are some of those ethical considerations, and what are some of the potential harms that may come from that?" flows your decision tree of what you do or don't do to create oversight for that system and what you need to put in place to mitigate whatever risks you identify associated with that system.

So you really have to take an informed perspective: simplify AI governance, frankly, by augmenting what you already have from a data governance perspective, and then target those areas where there is a potential risk, trying to understand the risk and mitigate it as much as possible. There's some good work that goes into this, but it will become much more fluid once we get used to it and get our processes in place. This is just the transition period as we move from more analog systems to much more sophisticated technologies.
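
[Editor's note: a minimal sketch of how the two threshold questions and decision tree described above could be expressed in code. The tier names, harm categories, and function are illustrative assumptions, not any jurisdiction's actual test.]

```python
# Harm categories loosely echoing those mentioned above; the exact set is an assumption.
HARM_CATEGORIES = {"psychological", "physical", "fundamental rights", "property", "environment"}

def risk_tier(is_ai_system: bool, potential_harms: set[str], affects_individuals: bool) -> str:
    """Route a use case to an oversight tier based on the two threshold questions."""
    if not is_ai_system:
        return "standard data governance"            # question 1: not AI, no AI-specific oversight
    recognized = potential_harms & HARM_CATEGORIES   # question 2: expected impact
    if recognized and affects_individuals:
        return "high risk: full impact assessment plus ongoing monitoring"
    if recognized or affects_individuals:
        return "medium risk: documented impact assessment"
    return "low risk: register the use case and re-check on material change"

# Example: an automated hiring screen touches fundamental rights and affects individuals.
print(risk_tier(True, {"fundamental rights"}, affects_individuals=True))
```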

[1:02:45] That's what gives me hope - there's a light at the end of the tunnel. This is a transition, and who knows how long this transition period will last, but I think there's a hopeful kind of coherence to all of this coming together, and consistent standards for all of our teams as well. Carole, thank you so very much for your time today - we appreciate having you on. Just for our listeners, how can they reach out to you? How can they get a hold of you?

[1:02:59] I would love for them to get a hold of me. You can visit our website at inq.consulting or reach out to me directly at cpiovesan at inq.consulting.

[1:03:11] Nice, and we'll drop that email in the description as well for folks who want to get a hold of you. Thank you, Carole. This was so valuable to us. We really appreciate your time today.

Thank you. My pleasure. Anytime. Thanks.

Thank you.

[1:03:24] Are you more scared, or are you more optimistic?

More scared. There's a lot I didn't know I didn't know. Talking to somebody like Carole just opens up the kimono - oh, here I go using another business cliché; dude, I went so long without using that one. It really opens my eyes to just how much teams need to think about. I've talked about empathizing with governance teams before - holy crap, there's so much to consider and so much coming for teams.

[1:03:50] Yeah, and I'd say I know a lot of data teams are focused on the technical problems around this - like tracking lineage - so I think it's great to hear words like "what's the harm we