Episode 1
Job Titles In Data Are Made Up
Mar 11, 2023
About this episode
Welcome to AI or Die's first episode! In this episode, we talk about Brendan's mullet, ChatGPT vs Bard, layoffs in tech, crazy job titles in Data, and much more.
Transcript
[0:02] One, two, three, four. It was intended for the human to support the machine, the machine to support the human network. AI means something different to anybody you talk to, which is wild. This is AI or Die. I got Duolingo Super, right, so I'm gonna learn Spanish - which, by the way, Duolingo's got some of the best UX stuff, especially for gamification, by far. I've been on my Duolingo game as well, by the way.
[0:27] Yeah, and I can't figure out how to upgrade to Max, which has all the AI chatbot stuff, which sounds really good.
You should friend me. Friend me on Duolingo. Let's speak Spanish together.
Oh, there's a whole friend thing? I've never done Duolingo.
Yeah, so you can connect with others who are trying to learn and you can chat with each other. It's got leaderboards too, so the whole game is you get XP as you go through stuff and then you just go up. The top third gets promoted, the middle third stays in that league, and then the bottom third gets demoted.
[1:00] So you're just trying to work up to the highest league that you can. It's great.
So the motivation is the embarrassment of getting demoted, basically.
Correct, yeah, yeah. In general you don't want to lose your streak, and there's a lot of good stuff to make you keep coming back, because they're trying to do those micro-learning hits that keep you going - especially when it's not your primary language. Without this, you know, you wouldn't be practicing Spanish, so it's pretty solid. It's pretty cool.
That's the whole reason I kept playing Wordle for so long - just to not break the streak of like 100 days in a row or something like that. I don't know what it is, but it just feels so wrong to break it.
[1:42] Welcome to podcast number two. We're very excited to kick this second episode off. We have a lot to talk about, but first Brendan, I understand you're going to a meditation retreat coming up, so I just wanted to understand more about that. How long is it? Where are you going? What's it called? Give them a shout out maybe.
[1:53] Yeah, definitely. So this is through the Zen Center of Denver. So I've been going there for a little over a year and I started meditating like right at COVID, so I started using like the apps and stuff. And then right when the center reopened, I started going for the first time, and it's great because it's like based on all the old traditions inside of Buddhism, right, that'll go back for years and years and years.
[2:20] So it's nice because it's like a very time-tested approach to meditation. It's also - the two leaders in that community are both like medical doctors as well, so there's definitely like - you know, the modern kind of Western medical spin into some of the stuff we talk about as well, which is really cool.
And this is going to be purely silent - no talking, keep your eyes down. If you need to say something, you write it down on a slip of paper, really to keep people's headspace clean, right? Yeah. And then this is just a weekend-long one, so it's Friday to Sunday essentially. And I fly to Columbus on Sunday, actually - so naturally the best thing to do after a silent meditation retreat is to go to Denver International Airport and fly across the country.
[3:04] So that'll be fun. But this is a shorter retreat - the longer ones are the full week. This is my third one in a year, I think, so I'm pretty excited for it. It's a good deep tissue massage for your brain, to really help you relax. It's not a chill spa weekend or anything like that - it's a lot more sitting and, you know, focusing on your breath and things like that. But it's a really good way to manage stress and also just to deepen your mindfulness, so I'm pretty stoked about it. It's gonna be great.
[3:39] Yeah, I imagine you get there and they give you like a robe and sandals and it's like a full-on like "leave who you were, you're here now" and there is no outside communication.
Leave who you were, yeah, yeah. It's basically like, you know, turn your phone off, of course. I got dinged the first time because I was checking Slack, trying to check work stuff. Oh. So I learned not to do that. But yeah, you wear all black - that's why I have the all-black outfit I wear all the time, because that's, you know, what you do. So, no robes and slippers. Some people wear robes, but it's also a lay practicing kind of community, so it's a different style of getup.
[4:22] But that sounds awesome. I can't wait to hear like what you get out of it.
Yeah, I can't stop talking, so I don't think that would be good for me. Yeah, at home with Keith for like a day is impossible for me. I don't know, it's really hard to do.
Yeah, I've always gone back and forth - like am I an introvert, am I an extrovert? Because I'm pretty in the middle, you know. And then I do these things and I'm like, I'm definitely an extrovert. Like I just need to like talk to people. By the end of the week I'm like biting my lip basically.
[4:52] I don't know about you guys, but I verbally process stuff. Like I like saying things out loud because it helps me like sort through the mess that is my mind. And yeah, I like to verbally process. I think that's part of it. I would still consider myself an introvert, but an introvert that talks a lot.
Even if it's to yourself at home, yeah. So... and Nick, you just got a puppy?
[5:19] Yes, yes. So that's a lot of days of me at home with just - with Alfred - talking to him, talking to him about AI and data and analytics and things like that. Yeah yeah, he loves it. He's getting more into it too.
But yeah, let's get into it, because we have a lot to talk about today. There's a ton in the news, and even this week - hours ago - folks were releasing new features related to GPT-4. You see NVIDIA is getting into it now. Adobe's getting into it now too. It seems like even just a few days ago, Microsoft started rolling out new features for Excel and for their Teams chats and things like that too.
[5:59] So this is a bit of a continuation from episode one, but seeing so many just new features and just kind of a dog pile on the market in terms of leveraging stuff like this too. Have you guys read any of these releases around like Microsoft or NVIDIA or Adobe or any of that?
[6:18] Yeah, it's been really interesting to see like how each company is taking the problems that they solve today and then adding generative AI. And some of them are more like established with stuff that we've seen. So like seeing Adobe Firefly, right, doing more of like the image generation, text generation kind of approach - I guess more image generation for what they're doing. Just interesting to see like kind of these sketches of what AI can do that were very popular and all over Instagram and all over the various, you know, social media things, now getting like cemented into real like heavy duty tools that people use in their workflow at work.
[6:48] So I think that's really interesting to see, kind of crossing into the mainstream. Obviously Azure opening up their - I think it's called the Azure OpenAI Service or something like that - and Azure Cognitive Search as well. So really integrating that quickly into their offering too. It's crazy to see how quickly, with ChatGPT, generative AI has moved so rapidly into a lot of companies' offerings.
[7:15] Yeah, I'm loving all of the work productivity use cases. I mean, these companies are iterating really, really quickly on incorporating - you know, we've got this meeting, we can record, you know, all of us talking and it can synthesize it into tasks and notes and, you know, all of these really useful tools for work productivity, which I just think is going to be so insanely fundamental. And that shift is going to be really interesting on how companies adopt that and roll that out at their company to try to get people to be more productive.
[7:47] I think this is what we should be leaning on - everyone's talking about this like co-pilot element, which we talked about on the last episode, but all these companies are doubling down, which is great. And I was actually just prepping for a panel discussion at the Data Connect Conference West Coast, which I'll be heading to this week actually - tonight going to Portland - and part of what we were talking about was this element of assessing technology externally.
[8:16] So like, basically, these big companies were saying, "Hey, we have to assess the implications of AI if we're going to be building it," right? So if we're going to build it, then we're going to be liable for the performance of what happens or the implications of it. So, you know, we can keep kicking the can on that because we don't have to build it. People are like, "Yeah, it's nice to have, but we don't have to build it."
Well, now a lot of these companies are having to interface with technologies that use it, and especially with the work productivity wave that we're seeing with AI, it's going to be really hard not to jump into that trend. And so how do these companies set up a framework to evaluate technology to ensure that there's no exposure or issues by utilizing that technology, whether it's in HR, legal, again work productivity, or finance? You know, how do we ensure as a company when we're evaluating tools that we are not exposing risk into the company?
[9:19] So it's just - it's just a fascinating thing. It's one of the things we talked about, and I think now companies don't have the option to kick the can anymore. It is - they are facing this imminent usage of AI.
[9:37] Do you guys see this as the start of a lot of consolidation? If Microsoft is bringing all these additional features that you might have gone to a third-party application for - like a meeting transcription service or one of the other productivity services - don't they kind of grandfather themselves in, in terms of the data privacy and the trust they've already built with the companies they're partnered with? That makes it easier to roll those features out instead of leaning on teams like procurement to acquire new tools and think through the security issues. It just feels like this is the start of a large consolidation of a lot of the tools out there related to these features.
[10:07] I agree. I think it's still going to be the Wild West, in my opinion, for a while. And, you know, just like we saw with YouTube, there are going to be new things that spark up. When Google acquired YouTube, they ran into these copyright issues that maybe they had some inkling they needed to anticipate. I'm not saying that there are copyright issues specifically with generative AI or that those haven't been thought about before - they have - but they ran into that and had to figure out clever ways of dealing with it.
[10:46] And so - and part of that was due to policies that they put in place, and part of that was due to additional technology that they put in the platform. And I think we're going to see a lot of that as well. So there's going to be issues that bubble up by the early adopters of some of these platforms. Like, to assume that Microsoft is just going to figure it out and like, you know, have no risk associated with that - I think is kind of silly, or that some of these big companies are just going to figure it out for us. That historically has never been the case.
[11:17] There's always things here and there that bubble up that they need to address. And I'm not saying that they won't address them, I'm just saying that there will be some collateral damage in some way, shape, or form.
[11:29] You know, I'd say like - I think the consolidation will kind of come in some period of time, like a year or two, couple years out, because right now basically there's been these problem spaces that people are working on, and now the solution space - like the opportunities - rapidly moved forward. So now there's a big open horizon of like, "How can we use generative AI for X?"
And then these big companies like Microsoft will scoop up the cream of the crop of the ones that are successful, because that's kind of how mobile and web worked too, right? Massive expansion of opportunities, and then consolidation of the winners into either big new incumbents or existing companies that scoop them up, right?
[12:07] So I think we'll see a lot of early acquisitions and then probably also a lot of like - seeing which ones are going to win and then a lot of acquisitions in general in this space as like people solve specific problems around the governance, or people solve specific problems around the user experience around some of the generative AI stuff. I just think there'll be a lot of innovation in general around this and therefore a lot of like acquisitions, venture capital, all that good stuff.
[12:33] You know, it's interesting - I haven't really heard a lot in the news about data breaches happening, and I wonder if there are going to be many more data breach events happening, or getting publicized, as a lot of this stuff gets consolidated in the future. It just doesn't seem like that's a big thing happening today. Maybe I'm just not getting exposed to it, but it seems a little quieter on the data breach side of things compared to a few years back.
[12:56] Yeah, my understanding too is that these LLMs are trained on data that's not really company-specific. So these large language models behind ChatGPT - I think they don't require feeding in your data. That's the classic problem we faced in industrial AI: "How do I give my data to this thing safely?" Because I'm basically opening up a back door, from a security perspective, by having that data go through a different system.
So with these LLMs, unless you do specific training for them - and I can't remember the term for that - like feed them your data, right, it's not going to be as much of a risk right now. But as they start becoming more hyper-specific with companies' own data, that will definitely create more need around security and other controls. There'll probably be more data breaches when people start feeding their own data into ChatGPT and things like that.
[13:41] Yeah, they're definitely feeding information in through the prompt. Even when you sign up for ChatGPT, it tells you not to put any sensitive information into the prompt. So there's an input factor - you feed it something in order for it to give you a response. And we mentioned prompt engineering as a title last time too - I'm seeing more and more interest in that over time, which has been super interesting. I wonder if that'll lead to teams staffed just to feed the generative AI, as a kind of partnership within an organization.
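(As an aside, the training term being reached for above is most likely fine-tuning. And to make the "don't put sensitive information into the prompt" point concrete, here's a minimal sketch of the kind of input scrubbing a team might run before any text reaches a hosted model - the patterns and the redact helper are illustrative stand-ins, not a production PII filter.)

```python
import re

# Illustrative patterns only - a real PII filter would be far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obviously sensitive tokens before text is sent to a hosted model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize this note: reach Jane at jane@acme.com or 555-867-5309."
print(redact(prompt))
# -> Summarize this note: reach Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```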
[14:17] I think - I think personally that it's going to move down in the stack. We're going to start seeing functionality driven by this. Like it alone is fun and interesting and amusing and entertaining to interface with, but we need - it's almost like it needs to be moved down into the stack and then the functionality needs to be built on top of it, which is a lot of what these newer startups or wave of AI startups are going to do.
They're going to take into consideration all of the functionality and the pros and cons of prompting and interfacing with something like ChatGPT. And I think that's necessary. So one of the topics that keeps coming up is this AI hallucination, which is basically you feed it a prompt, it gives you back a response that seems like it's fully valid but is semi-incorrect.
[15:21] So these platforms and tools and technology built on top of stuff like the LLMs - they need to take some of that into consideration and do their own quality checks. There's still a lot of functionality that needs to be built around it. On its own it's interesting, but it's a piece of the overall user experience, which I think is actually a very common thread in the AI space.
We tend to think that the model itself is going to be super powerful, but really the entire user experience is what's important, and the model feeds a very specific part of that. It is part of a feature. And a lot of these platforms that are being built - like legal tech, right - you can't just use the ChatGPT interface for legal tech. You need functionality built on top of it.
[16:08] And so that's what's starting to happen. We're seeing Microsoft do that, we're seeing all of these like advanced productivity companies doing that. So that is what's going to be interesting - it's this tiny little piece that powers this bigger experience that we need to take into consideration.
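(One common shape for the quality checks mentioned above - a generic pattern, not anything the hosts describe specifically - is a second "critic" pass over the model's own draft before anything reaches the user. A minimal sketch, with ask_llm() as a hypothetical stand-in for whatever completion API is in use:)

```python
# ask_llm() is a hypothetical stand-in for a real completion API call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a real completion endpoint")

def answer_with_self_check(question: str) -> dict:
    draft = ask_llm(question)
    # Second pass: ask the model to act as a critic of its own draft.
    critique = ask_llm(
        "You are a fact checker. List any claims in the answer below that may "
        "be wrong or unsupported, or reply with just OK.\n\n"
        f"Question: {question}\n\nAnswer: {draft}"
    )
    flagged = critique.strip().upper() != "OK"
    # Downstream UX can caveat, hide, or escalate flagged answers.
    return {"answer": draft, "flagged": flagged, "critique": critique}
```

A self-critique pass doesn't eliminate hallucination, but it gives the surrounding product a signal to act on - which is exactly the "functionality built on top" argument.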
[16:33] Yeah, because it's not like the best lawyers are going to get replaced by AI. The best lawyers are going to be the ones that partner with AI to pore through documentation that the same-size team couldn't have handled a year ago. So it's more about how you're leveraging it in the flow of your work to get things done, as opposed to being completely replaced - which I think is a very big misconception.
[16:54] Yeah. Again, back when the internet started becoming popular and people started figuring out what the internet could do, we had to interface with elements of the internet that now we don't interface with, you know? When we're exposed to things early on, we have to understand how they work a little more deeply, because that user experience hasn't been built around them yet.
And, you know, obviously browsers have come a long way and like, you know, all of these things that make it very seamless and more generally available. And I think that's going to be the case for the next five to ten years of just building on top of this very fundamental building block that will allow us to interact with it productively and understand maybe some of the implications of it.
[17:37] But again, we're still learning some of the evolutions and challenges and downfalls of the internet as more of those things are built on top of it over time. I just think it's going to be - it's going to be the same thing and we need to be paying attention to it.
[18:00] Yeah, especially the social impact. Who would have thought, when Instagram was first created, that a cute way to share photos of yourself would turn into the social impact it's had, especially on young people? Which makes me wonder what you guys are seeing around the explainability of AI - companies and teams focusing on that, or how they should think about explainability of AI in the future. Want to see if you guys have any thoughts on that.
[18:17] Yeah, definitely. I also just want to touch on the social impact piece, which seems to be very important in the AI space - OpenAI's landing page for GPT-4 explicitly calls out the alignment and safety pieces, which I think is interesting to see. The general pressure around having a stance on these things is resonating inside of AI, because I don't think Facebook had to talk about the social impact of what they were doing, or web apps or mobile apps had to talk about their addictiveness out of the gate.
[18:52] So I think it's very interesting that, almost out of the gate, AI is being upfront about that and cognizant of it, because they need to be. And then on the explainability piece, I do think it's very interesting with these LLMs, because we did some initial validation research around using generative AI inside of our product, and some of the initial questions - especially because our audience is data scientists - were like, "Can we somehow get some kind of confidence scores? Can you give me some context around this prediction, or some details on how it's coming to these conclusions?"
[19:16] So I think that's going to be an interesting design problem - how do we create trust for generative AI, and what is the most tangible way to do that? Because a data science audience is going to be more receptive to very rigorous statistical measures, but somebody who's more on the creative side - with Adobe Firefly, or other products solving different kinds of problems - how are they going to speak to their audience and still have trust in what they're putting out there?
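(One crude answer to the confidence-score question - an illustration, not anything the hosts endorse - is to average the per-token probabilities a model reports. This assumes an API that returns token log-probabilities, which several completion APIs did at the time; the result is a rough, uncalibrated proxy rather than a rigorous statistical measure.)

```python
import math

def mean_token_confidence(token_logprobs: list[float]) -> float:
    """Average per-token probability - a crude, uncalibrated confidence proxy."""
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)

# Hypothetical log-probabilities for a four-token completion, shaped like
# what a completion API with logprobs enabled might return.
logprobs = [-0.05, -0.30, -1.20, -0.10]
print(f"rough confidence: {mean_token_confidence(logprobs):.2f}")  # ~0.72
```

Calibrating a number like that against actual correctness, and presenting it to a non-statistical audience, is exactly the open design problem being described.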
[19:48] Yeah, and a couple of thoughts here. The first one is that NIST formally put out their framework around ethical and responsible use of AI - the AI Risk Management Framework. They're on their second draft now, so that's out there in the wild. I'm not saying that should be everybody's end-all be-all, but it's a good place to start if you're thinking about risk and about responsible use of AI in general, even from the development perspective. They include some stuff in there.
[20:18] And we can flash that up for everyone so they can see it. There may be subsequent draft versions of the framework, but they, along with ISO, are actively working on standards that we can put in place, just like they do for cybersecurity. I mean, this is going to be kind of the next evolution of understanding risk at scale.
And that is kind of the number one thing that I think prevents a lot of companies from mass adoption or moving very quickly in that direction: the risk exposure. The second is just not knowing what you don't know, which again ties into risk. And the third is the cultural shift to move in that direction.
[21:12] So for a lot of these organizations who we call kind of non-native - you know, non-AI native companies, right - they've been around for 50 plus years. Their operations are working in a sense to keep things going, and they're not dependent on AI to function, whereas this new wave of companies is dependent on AI to function.
So I think it puts less urgency around shifting out that building block to something like AI, especially if they didn't start with using it. And so that's one of the most interesting transformations, in my opinion, is what that landscape is going to look like because, again, it is top of mind for everyone.
[21:53] Yeah, you know, we're having conversations with AI lawyers about some of the trends and implications they're seeing. And I think it's interesting to note that there are existing laws out there that can still be broken with AI - they don't have to be AI-specific laws. So that's also an interesting trend. When you're talking about ethical and responsible use, I think that industry has made a lot of progress in the last five-ish years, and now we're seeing some formal frameworks getting published and becoming more popular.
[22:25] Especially with what we talked about coming out of the EU - you've talked about it as kind of a GDPR for AI - and how to set up that structure around copyright for created content. I'm seeing a lot more documents released by the Federal Trade Commission here in the U.S. too. In the years ahead, I could totally see many more requirements for companies using AI to build in explainability and have better documentation around the processes that drive their AI internally.
[22:50] Well, yeah - imagine if every single system you interfaced with had to tell you there was AI under the hood, had to tell you the output was generated, like, "Here's the attribute about you that heavily weighted the output to be X," right? Imagine if there was that level of transparency.
And I know I try to find a lot of parallels that exist in the world today - think about terms and conditions that we all blindly accept whenever we sign up for something for free. But what if there was a new standard for some of these systems under the hood that enforced transparency? So as a person, I could understand what information was driving the outcome that either got me approved for a loan or didn't.
[23:21] Why do I need to explicitly ask, and try to find the right person to talk to, and try to find the right questions to ask if I don't know them? We need to think about the general population and the implications of these systems on them. And I think that's going to be a really interesting user experience trend with AI over time.
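(A toy version of that loan example - made-up weights over made-up features, nothing like a real credit model or its regulatory disclosure rules - shows how a system could at least report which attribute contributed most to its decision:)

```python
# Made-up weights over made-up features, purely for illustration.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}

def explain_decision(applicant: dict, threshold: float = 0.5) -> str:
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    # Surface the single attribute that pushed the score hardest either way.
    top = max(contributions, key=lambda k: abs(contributions[k]))
    outcome = "approved" if score >= threshold else "declined"
    return (f"{outcome}: the attribute that most heavily weighted this result "
            f"was '{top}' (contribution {contributions[top]:+.2f})")

print(explain_decision({"income": 0.9, "debt_ratio": 0.6, "years_employed": 0.2}))
# -> declined: the attribute that most heavily weighted this result was 'debt_ratio' (contribution -0.90)
```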
[24:09] Yeah, it's almost like a lot of the labels on products - "Made in the USA" - it's you're going to see like "Made with AI" as like a common stamp you might start seeing on products too. That could be interesting.
Switching gears a bit - this is related to companies and the overall programs they're trying to roll out to help build better transparency around AI models and data and analytics in general. It seems like they're structuring these programs - we hear a lot about literacy programs, data engineering modernization, things like that - while trying to avoid a certain two words: the L word and the T word. They're trying to avoid "learning" and "training" as the words for it, framing it instead as adoption and a change in habits - as change management.
[24:37] So curious as to your thoughts on that. I wonder if it's that training has a bad connotation, or that training is more associated with advancing your career, whereas this is about changing how people work. But I'm very curious why these programs are being structured less around learning and training and more around changing habits: "We're following these processes now, we're doing these things differently, and we need to get everyone aligned at a very large scale across our company."
[25:13] Yeah, just a quick clarification for the listener because I know it's confusing because like learning and training are also like AI modeling terms, so we're talking more about like the people and like the way that they're going to be building and using AI, and not the actual like training and learning of the models themselves. That's always something we get - it's tricky when you're talking AI because there's some collision of those terms. People learning and training, not model learning and training - the old-fashioned wetware, not the software and the hardware, you know.
[25:44] So in general, I'd say that shift feels like it's come from a focus on how to realize the ROI of these initiatives - I think that's a big piece of it. Yeah. Training and learning are seen more as development of the people, which is important, of course, and companies want to make sure that's realized and actualized inside the organization. But given the need to show progress around AI, I think there's more of a drive around what's coming out of this, right?
And training and learning have historically been seen as not as connected to that. So they want organizational change, they want career development, they want to modernize the skills inside the organization. But I think there's more focus now on tying that to the development and progress of AI, because that's a higher priority than the people development piece.
[26:28] It's more like: we need to show results around AI to be competitive, to really show that we're providing the latest cutting-edge technology. And training and learning are seen as slower ways to get there, if you will, whereas change management and those pieces are more directly applicable to the output around AI.
[27:00] Yeah, and I love starting with kind of a definition around some of these things too. We've talked to so many companies about how they're trying to do this today, because whether or not they like it, there's a massive transformation that is happening and will happen even further. So they're trying to figure out: how do we get ready for that transformation? And a big piece of that is culture and people.
And whether or not they want to call it training or learning or whatever, there is a fundamental change in the way that people are going to be working, interacting with data, and interfacing with AI systems that's coming, you know, no matter what.
[27:32] And so I think they're trying to structure it in a way that can move very quickly and is extremely agile. And what's hard is it's kind of like hitting a moving target. New standards are getting released, things are moving very quickly, new technology is getting put out into the industry - new concepts, new paradigm shifts, over and over and over again.
So I think a lot of companies trying to catch up are like, "Well, it's like hitting a moving target. What are the base fundamentals our people need to know in terms of skills? But more importantly, what are our standards and procedures and processes as a company, so that we behave differently?"
[28:09] And the actual learning part is a smaller piece of this bigger shift that's happening. Companies are trying to incorporate that element into the bigger, more strategic shift, whereas the learning and training topic is typically this bottom-up approach of "let's get everybody upskilled on SQL, let's get everybody to learn Python." Those things are important, but they're not strategically aligned.
And so I think that's where the disconnect is happening now - there's learning happening inside this workforce transformation and all of these big strategic initiatives. A hundred percent. But they're avoiding those words, from what we've seen, because they don't want it to get bucketed into this really obscure, overly generalized effort of getting people to learn one specific skill.
[29:05] And that is one of the other workforce, future-of-work transformations happening at the enterprise level, specifically around data and AI skills.
[29:39] And I'm trying to compare this to other large-scale changes that organizations have had to go through, whether it's adopting more digital practices or adopting agile project methodologies. Does anything come to mind for you guys around how this compares to large transformations organizations have gone through in the past?
[29:52] Yeah, I was actually just thinking about that, because it is interesting - agile had such a strong training approach, right, and even additional project management methodologies, or even in the digital transformation vein. But I wonder if that came later in the life cycle of agile adoption, because I think they had already figured out the patterns that were working.
Also, agile was a way to realize more value from digital technology, but it wasn't really the initial onset of digital technology. So I wonder if that's the kind of thing that will come in five, ten years, or whatever that time horizon looks like, depending on how fast AI transformation goes.
[30:26] And thinking about that too - right now it'd be hard to train people on AI and data until you've figured out how they're going to be using data and AI, right? Whereas agile is much more tangible: this is how we're now going to be working as a team. And we focused on that a lot, obviously, as we did learning and training - how do you take this and apply it? But we noticed a lot of teams were still kind of figuring that out.
So starting with a broad scattershot - here are topics across the industry, here are considerations - is good, but they wanted something more tangible: "Hey, we need to crack this nut first, then teach people how to use this thing."
[31:03] And that's why I see us, you know, focusing more and more - the industry in general focusing more and more on like what is our relationship to this thing, how are we going to use it, let's show some wins, let's scale that out. And then I'm sure there'll be a big mass broadcast of like this is how we work with it going forward.
But I think they're still trying to figure out those initial couple of layers, right? Especially in the broad enterprises. Obviously companies that are digital native or AI native have already cracked that and are doing the broader, scaled stuff. But I think that's just where the market - or the industry - is at right now.
[31:34] Yeah, and you see it broken down into: what is it, why is it important, how should we be thinking about and framing it - which are your big data literacy and AI literacy programs. The "how do we do it" is a little more nuanced.
[31:51] I wonder, too, if it's almost like the hub is expanding in terms of people who are touching this new technology and working with it. It starts really, really tight at a lot of these companies, and then they work to bring more people into the fold through projects. And then eventually, once it gets to a certain point, it makes sense to pull the trigger on the wider education: "Right, here's how we're applying it." That's the difference with some of the more tech-native companies - they're already at the wide "we know how to work with it, we know how to use it at scale" stage - whereas it's just tough to expand that hub without knowing exactly how to do it and who to involve.
[32:18] Definitely. I think the ones that are at the edge are also looking to share within themselves, right? Because they are the experts in that area. So I don't know if they're going to go outside for general practices. They might go learn about the new thing - you know, to understand generative AI. So I think training and learning will always be in this space, just given how fast it's moving.
And I think that is unique compared to like digital transformation because obviously digital technology moves super fast, but like I think AI is going to move even faster. So I think like there will be a constant relearning piece to it. And I think it will always be part of like - to be successful with AI, you need to have a very strong approach to like how do we learn the latest stuff, how do we scale the latest stuff.
[33:25] But I think right now people are still kind of in the early days in the enterprise around how we scale this.
[33:32] Yeah, or how do we even do anything with it? I mean, we still talk to so many people who aren't even touching it, you know? They still want to get a basis around their data, you know? I mean, without good data, what can you even do? And so I think people are still very, very focused on that - very focused on what do we have, how do we identify these use cases around AI, and trying to grasp at almost creating a strategy and setting up the fundamentals.
I think there are a lot of companies still at that point - and some that may feel like they're too large to fail and don't need to innovate as quickly against this. That's also a common trend, and it will be really fascinating to see how that strategy plays out in the long term.
[34:12] Because that's where I really sympathize with a lot of these CDOs and CTOs who are tasked with this large-scale transformation in the next one to three years. We talked about the ethical risk that goes into it, about understanding the landscape of what data and tools you have today and how you involve people, and then also the change management aspect, which is like an iceberg at large-scale organizations.
I keep seeing that the average tenure for a CDO at a typical company is like 18 months. It's so hard to drive that large a transformation with all of these things you've got to worry about in the ML and AI space, along with the general data literacy of your organization. It's quite the task they're faced with.
[35:00] Yeah, most of their job is being a really, really strong politician. It's communication, it's buy-in, it's use case identification, prioritization, budget - really, really fighting for attention at the company, whatever that means: money, resources, it doesn't matter. Are you good at convincing people that the change is imminent and is going to be beneficial? Or are you going to focus on the technical complexities, which are actually kind of the easier thing to solve at this point?
[35:32] But yeah, I think a lot of people come up through the CDO route and don't realize they're going to be in this position where they have to be the champion and the face of this initiative at their company. And they're going to have to become really good friends with all the other leaders across the organization and figure out: where are the pockets where I can be successful?
It is a really, really hard job, and I feel bad for some of the CDOs I talk to who feel like they're stuck in the mud trying to evangelize this and get other people on board to prioritize it. It's one of the biggest reasons CDOs don't last super long in a role at a company.
[36:18] Yeah, the way I think about it now is: work backwards from what work will look like with AI in five to ten years. There are going to be hundreds and thousands of people inside each enterprise - each organization, really - leveraging AI in the flow of work, or building custom models, or doing whatever it's going to be around this technology.
To get there from here, I think you can see there's a large amount of change management work that needs to happen - this learning and training, whatever we want to call it. That problem space is very large: how are we really going to get all these people up to speed on how we work and what they need to know? That, I think, is the piece that is daunting.
[37:07] There's obviously the technical piece of how you scale to hundreds and thousands of models, and hundreds of data scientists. But there's also this: we're going to have thousands of users, a lot of people who need to understand that models need to be retrained, and what that means, and how we do it. I think that's the piece we see on the horizon that we're really focused on.
[37:31] Dude, lots of change coming. And it's very exciting. It seems like we keep hearing the same things as we talk to so many different companies of different shapes and sizes, and really different industries too. It definitely feels like a lot of consolidation, or hive mind - I'm trying to think of the right word here - around facing the same problems and trying to figure out large-scale ways to address them, from the CDO level all the way down to the average person working at their desk and figuring out how to work with AI.
And yeah, we've seen a lot of people be very protective about their approach, because they've spent so much time designing it and trying to get it adopted internally - which is kind of what we want to help them do, you know? The thing I keep hearing is: what is the industry doing? What are the best practices in the industry? How do we incorporate what we need, when we need it, in our company? How do we set company-level policies and standards that are specific to our industry? And then how do we break that down even further and realize it with our people, and with our technology to support it?
[38:33] And so it's this kind of funnel: experts moving around to different companies, setting those strategies and best practices and these big initiatives that are really hard and take a lot of budget, and driving that down into "this is how the company is going to operationalize this at large scale and adopt it across the organization."
And so I think there's this funnel of like, "Hey, this top part is moving really quickly, so therefore you have to adjust what you're doing very quickly, and then you've got to get your people to adopt that really quickly." So this is a really hard thing, and of course you've got opinions - everybody's got opinions on what is going to be most effective for their company, company culture, their data, their industry, their use cases. That's great.
But there are some things that are starting to establish as like standards - like I said, the NIST framework, ISO is coming out with some stuff too - and these kind of paradigm shifts that are happening. And you've got to have a really good way of managing all of that.
[39:22] And I really think that shifts the roadmap from within the next one to three years to as soon as possible or by a certain date based on a lot of those requirements coming out too.
[39:59] Yeah, and just to tie it back to what you said about the other transformations that have happened at scale - I personally know agile very well, and I think the lesson from how change management has gone in previous paradigms is: make sure we focus on the problems with AI. What are we really solving for? All these hype cycles come through around technology - DevOps, agile - and they do solve really important problems.
I just think we need to make sure we're focused on solving those problems and not just chasing the latest buzzword, jumping on the bandwagon of "everybody's doing it, so therefore I'm doing it." Each of these needs to be very fine-tuned to how that organization works, what problems they're facing, and what their users, customers, and stakeholders really need around AI and data - just so we're not stuck in the same fatigue you hear about with a lot of the other transformation initiatives, where it's like, "Why are we even doing this anymore? Just to do it?" Just to check a box.
I think it really needs to be anchored in the real problems that we're solving.
[41:01] Yeah, because it gets old quick and leaves a sour taste in your mouth if you don't establish early on what's in it for you as the employee, what's in it for you as the manager of a certain team - really tying it back to "this makes your job easier" or "this lets you focus on much more interesting problems." And that translation is a lot of what we're seeing right now - that positioning, you know?
And there's a lot of fatigue around change. If you don't have your approach figured out as a company and you roll something out that's flawed, that's going to cause a lot of frustration for a lot of people. People get fatigued by that - "my company just doesn't know what it's doing," or "every single time they make a change it doesn't work" - and they get frustrated, and they don't have the ability to feed back into that loop, you know?
[41:42] And I think that's the other fear that companies have. We're changing something so fundamental - like now every single job is a data job - okay, well, we'd better get that right. We'd better understand what that is and get it right before we start telling people to do their jobs differently.
And that's yet another fear that people have. How do you organize all of that? How do you make sure you're ready? You couple that with the macroeconomic things happening around job instability, and people working in all these different places - hybrid at the office or at home - and it's a crazy time right now. It really is.
[42:12] This has been good, guys. Any other things you want to speak to just related to some of the topics before we wrap up?
[42:18] No, I think we covered a lot of really core things that people are thinking about that at least I've heard in conversations even in the last couple of weeks, you know? A lot of leaders that I've been talking to inside of these companies and what they're thinking about, what's top of mind for them.
They're thinking about these fundamental culture shifts. And I think what is so incredibly fascinating is that we've had the opportunity to work with really, really talented AI native companies - companies that started with the value proposition of utilizing AI for a very specific function, right? It is ingrained into the company and what the company is and the value it provides.
And then we've worked with a lot of these other companies where that's not the case. And what's interesting is looking at the culture between them - one culture is not going to work for the other type of company, right?
[43:12] And so how do you do that? How do you shift an entire company culture - every single person at the organization - and shift their mindset? I think that's such an interesting problem to try to solve, and it's sparked by this need to shift all of these fundamentals in the way people are working.
And I am really interested to see how people start to crack that. There are little things here and there, like we talked about with learning and training, but there's also this community element and, you know, showing wins. So I think continuing to track what people are doing to make that happen will be quite fascinating.
[43:48] Yeah, that's certainly super interesting. And it's one of the largest misconceptions we see: you can't copy-paste how Google operates into your company, or how Meta operates into your company. It's not a leapfrog step where you can just grab what they're doing and apply it. You have your own unique challenges to address, and there's no easy route to jump straight to the step where they're at.
[44:06] Yeah, and one of the things I walked away from this conversation excited about is the opportunity we have around data and AI - for everybody working on it, this is the next big problem to figure out: not only the technology and the technical issues, but also the change management, the people side, the process side.
So it's very exciting to be working on these types of problems. I had the opportunity to speak with somebody who's further along in their engineering career - he'd spent a lot of time at IBM and worked on a lot of hard problems, going from mainframes through a lot of the evolutions that have happened on the computer science side. And he's learning Python right now, getting up to speed on artificial intelligence and machine learning.
[44:35] So it's cool to see - that's the nature of technology, right? Solving the next problem, figuring out the next nut to crack. So it's cool to be able to work on some of these problems around AI and machine learning. It's very invigorating.
[44:45] It is super fun, and super global - it seems like every company is trying to think about this and crack this nut too, which makes it super fascinating: the people, and the solutions they're coming up with.
Agreed. I'm going to use "crack this nut" any time I can. It's gonna be an interesting sentence.
In two weeks we're going to be fatigued by that already.
[44:58] What would be really fascinating is to transcribe all of the podcasts and look at all the phrases we say constantly - like "crack this nut."
Oh my gosh, did you see that Venn diagram of all the different corporate terms people are saying? The food-related ones, you know, those are just terrible. I feel like we always culturally grab them too.
[45:16] Oh yeah - because we're talking to these large teams at these large enterprises, you can't help but pick up corporate speak. When I use it, I surprise myself sometimes, just because I'm hearing it in conversation left and right.
[45:27] With that, guys, this is AI or Die, where we discuss the news, trends, and great debates of AI. You can listen - or die - by going to our website getillini.com, and you can subscribe - or die - on any of the places you find podcasts. We had a blast on this one. We're working on some really exciting episodes coming up soon, so check in on our site as we release new episodes - it all gets started there. Thank you everybody for your time today. We're gonna wrap this one up.