Episode 2
Is ChatGPT having AI Hallucinations?
Mar 29, 2023
About this episode
Brendan’s Meditation Retreat
New ChatGPT features
Generative AI being adopted by companies
AI Hallucinations
Are companies avoiding the L and T words?
Transcript
[0:02] One, two, three, four. It was intended for the human to support the machine, the machine to support the human network. AI means something different to everybody you talk to, which is wild. This is AI or Die. I got Duolingo and went Super, right, so I'm gonna learn Spanish - which, by the way, Duolingo's got some of the best UX stuff, especially for gamification, by far. At least I've been on my Duolingo game as well, by the way.
[0:27] Yeah, and I can't figure out how to upgrade to Max, which has all the AI chatbot stuff, which sounds really good.
You should friend me. Friend me on Duolingo. Let's speak Spanish together.
Oh, there's a whole friend thing. I've never done Duolingo - so you can connect with others who are trying to learn and you can chat with each other. It's got leaderboards too, so the whole game is that you get XP as you go through stuff and then you move up. The top third gets promoted, the middle third stays in that league, and then the bottom third gets demoted back down.
[1:00] So you're just trying to work up to the highest league that you can. It's great. That's the motivation - embarrassment of getting demoted, basically.
Correct, yeah yeah. In general, yeah - and you don't want to lose your streak. There's a lot of good stuff to make you keep coming back, because they're trying to do that micro-learning thing: little hits that keep you going, especially when it's not your primary language. Without this, you know, you wouldn't be practicing Spanish, so it's pretty solid. It's pretty cool.
That's the whole reason I kept playing Wordle for so long - just to not break the streak of like 100 days in a row or something like that. I don't know what it is, but it just feels so wrong to break it.
[1:42] Welcome to podcast number two. We're very excited to kick this second episode off. We have a lot to talk about, but first Brendan, I understand you're going to a meditation retreat coming up, so I just wanted to understand more about that. How long is it? Where are you going? What's it called? Give them a shout out maybe.
[1:53] Yeah, definitely. So this is through the Zen Center of Denver. I've been going there for a little over a year, and I started meditating right when COVID hit, so I started using the apps and stuff. And then right when the center reopened, I started going for the first time, and it's great because it's based on all the old traditions inside of Buddhism, right, that go back for years and years and years.
[2:20] So it's nice because it's like a very time-tested approach to meditation. It's also - the two leaders in that community are both like medical doctors as well, so there's definitely like - you know, the modern kind of Western medical spin into some of the stuff we talk about as well, which is really cool.
And this is going to be pure silence - no talking, keep your eyes down. If you need to say something, you write it down on a slip of paper, really to keep people's headspace clean, right? Yeah. And then this is just a weekend-long one, so it's Friday to Sunday essentially. And I fly to Columbus on Sunday, actually - so of course the best thing to do after a silent meditation retreat is to go to Denver International Airport and fly across the country.
[3:04] So that'll be fun. But this is a shorter retreat - the longer ones are, you know, the full week. So this is my third one, actually, I think, in a year, so I'm pretty excited for it. It's like a good deep tissue massage for your brain, to really help you relax. It's not a chill spa weekend or anything like that - it's a lot more of sitting and, you know, focusing on your breath and things like that. But it's a really good way to manage stress and also to deepen your mindfulness, so I'm pretty stoked about it. It's gonna be great.
[3:39] Yeah, I imagine you get there and they give you like a robe and sandals and it's like a full-on like "leave who you were, you're here now" and there is no outside communication.
Leave who you were, yeah yeah. It's basically like, you know, turn your phone off, of course. I got dinged the first time because I was checking Slack, trying to keep up on work stuff. Oh. So I learned not to do that. But yeah, you wear all black - that's why I have the all-black outfit I wear all the time, because that's, you know, what you do. So, no robes and slippers. Some people wear robes, but it's also a lay practicing kind of community, so it's a different style of getup.
[4:22] But that sounds awesome. I can't wait to hear like what you get out of it.
Yeah, I can't stop talking, so I don't think that would be good for me. Even staying quiet at home with Keith for like a day is impossible for me. I don't know, it's really hard to do.
Yeah, I've always gone back and forth - like am I an introvert, am I an extrovert? Because I'm pretty in the middle, you know. And then I do these things and I'm like, I'm definitely an extrovert. Like I just need to like talk to people. By the end of the week I'm like biting my lip basically.
[4:52] I don't know about you guys, but I verbally process stuff. Like I like saying things out loud because it helps me like sort through the mess that is my mind. And yeah, I like to verbally process. I think that's part of it. I would still consider myself an introvert, but an introvert that talks a lot.
Even if it's to yourself at home, yeah. So... and Nick, you just got a puppy?
[5:19] Yes, yes. So that's a lot of days of me at home with just - with Alfred - talking to him about AI and data and analytics and things like that. Yeah yeah, he loves it. He's getting more into it too.
But yeah, let's get into it, because we have a lot to talk about today. There's a ton in the news - even this week, hours ago, folks were releasing new features related to GPT-4. You see NVIDIA getting into it now, Adobe's getting into it now too. It seems like even just a few days ago, Microsoft started rolling out new features related to Excel and to their Teams chats and things like that.
[5:59] So this is a bit of a continuation from episode one, but we're seeing so many new features and kind of a dogpile on the market in terms of leveraging stuff like this. Have you guys read any of these releases from Microsoft or NVIDIA or Adobe?
[6:18] Yeah, it's been really interesting to see how each company is taking the problems they solve today and then adding generative AI. And some of them are more established with stuff that we've seen. So seeing Adobe Firefly, right, doing more of the image and text generation approach - I guess mostly image generation for what they're doing. It's interesting to see these sketches of what AI can do, which were very popular all over Instagram and the various social media platforms, now getting cemented into real, heavy-duty tools that people use in their workflow at work.
[6:48] So I think that's really interesting to see it crossing into the mainstream. Obviously there's Azure opening up their Azure OpenAI Service - I think that's what it's called - and integrating with Cognitive Search as well. So they're really integrating that quickly into their offering. It's crazy to see how quickly, with ChatGPT, generative AI has moved so rapidly into a lot of companies' offerings.
[7:15] Yeah, I'm loving all of the work productivity use cases. I mean, these companies are iterating really, really quickly on incorporating - you know, we've got this meeting, we can record, you know, all of us talking and it can synthesize it into tasks and notes and, you know, all of these really useful tools for work productivity, which I just think is going to be so insanely fundamental. And that shift is going to be really interesting on how companies adopt that and roll that out at their company to try to get people to be more productive.
[7:47] I think this is what we should be leaning into - everyone's talking about this co-pilot element, which we talked about on the last episode, but all these companies are doubling down, which is great. And I was actually just prepping for a panel discussion at the Data Connect Conference West Coast, which I'll be heading to this week - tonight, actually, going to Portland - and part of what we were talking about was this element of assessing technology externally.
[8:16] So basically, these big companies were saying, "Hey, we have to assess the implications of AI if we're going to be building it," right? Because if we're going to build it, then we're going to be liable for the performance or the implications of it. And so they could keep kicking the can down the road, because they didn't have to build it. People were like, "Yeah, it's nice to have, but we don't have to build it."
Well, now a lot of these companies are having to interface with technologies that use it, and especially with the work productivity wave that we're seeing with AI, it's going to be really hard not to jump into that trend. And so how do these companies set up a framework to evaluate technology to ensure that there's no exposure or issues by utilizing that technology, whether it's in HR, legal, again work productivity, or finance? You know, how do we ensure as a company when we're evaluating tools that we are not exposing risk into the company?
[9:19] So it's just - it's just a fascinating thing. It's one of the things we talked about, and I think now companies don't have the option to kick the can anymore. It is - they are facing this imminent usage of AI.
[9:37] Do you guys see this as the start of a lot of consolidation? If Microsoft is bringing in all these additional features that you might have gone to a third-party application for - like a meeting transcription service or one of the other productivity services - don't they kind of grandfather themselves in, in terms of the data privacy and trust they've already built with the companies they're partnered with? It makes it easier to roll those features out instead of leaning on teams like procurement to acquire new tools and think through the security issues with that. It feels like this is the start of a large consolidation of a lot of the tools out there related to these features.
[10:07] I agree. I think it's still going to be the Wild West, in my opinion, for a while. And, you know, just like we saw with YouTube, there are going to be new things that spring up. When Google acquired YouTube, they ran into copyright issues that they weren't fully prepared for - maybe they had some inkling that they needed to anticipate them. I'm not saying there are copyright issues specifically with generative AI, or that those haven't been thought about before - they have - but YouTube ran into that and had to figure out clever ways of dealing with it.
[10:46] And part of that was due to policies they put in place, and part of it was due to additional technology they put in the platform. And I think we're going to see a lot of that as well. There are going to be issues that bubble up for the early adopters of some of these platforms. To assume that Microsoft is just going to figure it out and have no risk associated with that - I think that's kind of silly, or that some of these big companies are just going to figure it out for us. Historically, that has never been the case.
[11:17] There's always things here and there that bubble up that they need to address. And I'm not saying that they won't address them, I'm just saying that there will be some collateral damage in some way, shape, or form.
[11:29] You know, I'd say the consolidation will come in some period of time - a year or two, a couple years out - because right now there have been these problem spaces people are working on, and the solution space - the opportunities - has rapidly moved forward. So now there's a big open horizon of "How can we use generative AI for X?"
And then these big companies like Microsoft will scoop up the cream of the crop of the ones that are successful, because that's kind of how mobile and web worked too, right? A massive expansion of opportunities, and then consolidation of the winners into either big new incumbents or existing companies that scoop them up, right?
[12:07] So I think we'll see a lot of early acquisitions, and then, once we see which ones are going to win, a lot of acquisitions in general in this space - as people solve specific problems around governance, or around the user experience of some of this generative AI stuff. I just think there'll be a lot of innovation in general around this, and therefore a lot of acquisitions, venture capital, all that good stuff.
[12:33] You know, it's interesting - I haven't really heard a lot in the news about data breaches happening, and I wonder if there are going to be many more data breach events happening, or getting publicized, as a lot of this stuff gets consolidated in the future. It just doesn't seem like that's a big thing today. Maybe I'm just not getting exposed to it, but it seems a little quieter on the data breach side of things compared to a few years back.
[12:56] Yeah, my understanding too is that these LLMs are trained on broad data that's not specific to your company. So these large language models behind ChatGPT - I think they don't require feeding in your data. That was the classic problem we faced in industrial AI: "How do I give my data to this thing safely?" Because I'm basically opening up a back door, from a security perspective, for that data to go through a different system.
So with these LLMs, unless you do a specific training for them - I can't remember the term for that [fine-tuning] - and feed them your data, right, there's not going to be as much risk associated right now. But as they start becoming more hyper-specific with their own data, that would definitely create more need for security and other controls. There'll probably be more data breaches once people start feeding their own data into ChatGPT and things like that.
[13:41] Yeah, they're definitely feeding information in through the prompt. Even when you sign up for ChatGPT, it tells you not to put any sensitive information into the prompt. So there's an input factor - you feed it something in order for it to give you a response. And we mentioned prompt engineering as a job title last time too - I'm seeing more and more of that interest over time, which has been super interesting. I wonder if that'll lead to teams staffed just to feed the generative AI that an organization has partnered with.
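To make that "input factor" concrete, here's a minimal sketch of what feeding information in through the prompt looks like in code, using the OpenAI Python client roughly as it existed at the time of recording. The API key placeholder and the naive email-redaction helper are illustrative assumptions, not anyone's recommended practice - the point is just that the prompt is the only channel your data travels through.

```python
# Minimal sketch: the only data the hosted model sees is whatever you
# put in the messages. Assumes the OpenAI Python client circa early
# 2023; the redaction helper is a deliberately naive illustration.
import re
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def redact_emails(text: str) -> str:
    """Crude example of scrubbing one kind of sensitive token before it
    leaves your environment - real pipelines need far more than this."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED]", text)

internal_notes = "Ping jane.doe@example.com about the Q3 churn numbers."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Summarize the user's notes in one sentence."},
        # The prompt is the data pipeline: nothing else is shared.
        {"role": "user", "content": redact_emails(internal_notes)},
    ],
)
print(response["choices"][0]["message"]["content"])
```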
[14:17] I think - I think personally that it's going to move down in the stack. We're going to start seeing functionality driven by this. On its own it's fun and interesting and amusing and entertaining to interface with, but it almost needs to be moved down into the stack, with the functionality built on top of it - which is a lot of what this newer wave of AI startups is going to do.
They're going to take into consideration all of the functionality and the pros and cons of prompting and interfacing with something like ChatGPT. And I think that's necessary. So one of the topics that keeps coming up is AI hallucination, which is basically: you feed it a prompt, and it gives you back a response that seems completely plausible but is partly or entirely incorrect.
[15:21] So these platforms and tools and technology built on top of stuff like the LLMs - they need to take some of that into consideration and do their own quality checks (one rough example of such a check is sketched below). There's still a lot of functionality that needs to be built around it. On its own it's interesting, but it's one piece of the overall user experience, which I think is actually a very common thread in the AI space.
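As a rough illustration of the kind of quality check a platform could layer on top of an LLM, one simple technique is self-consistency: sample the same question several times at non-zero temperature and flag the answer when the samples disagree. This is a sketch under the same era-appropriate client assumption as above; the model name, threshold, and string normalization are all arbitrary illustrative choices, not the method discussed on air.

```python
# Crude self-consistency check: disagreement across samples is a rough
# signal of possible hallucination. Assumes the OpenAI Python client
# circa early 2023.
from collections import Counter
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def sample_answers(question: str, n: int = 5) -> list:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        temperature=0.8,  # deliberately non-deterministic
        n=n,              # draw several independent samples
    )
    # Exact-match voting is crude; real systems compare semantically.
    return [c["message"]["content"].strip().lower() for c in resp["choices"]]

def answer_with_check(question: str):
    answers = sample_answers(question)
    top, count = Counter(answers).most_common(1)[0]
    # If fewer than half the samples agree, treat the answer as suspect.
    return top, count >= len(answers) / 2

answer, consistent = answer_with_check("What year did Apollo 11 land on the Moon?")
print(answer if consistent else f"{answer}  [low consistency - verify before trusting]")
```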
We tend to think that the model itself is going to be super powerful, but really the entire user experience is what's important, and the model feeds a very specific part of that. It is part of a feature. And with a lot of these platforms being built - like legal tech, right - you can't just use the ChatGPT interface for legal tech. You need functionality built on top of it.
[16:08] And so that's what's starting to happen. We're seeing Microsoft do that, we're seeing all of these like advanced productivity companies doing that. So that is what's going to be interesting - it's this tiny little piece that powers this bigger experience that we need to take into consideration.
[16:33] Yeah, because it's not like the best lawyers are going to get replaced by AI. The best lawyers are going to be the ones that partner with AI to pore through documentation that the same size team couldn't have handled a year ago. So it's more about how you're leveraging it in the flow of your work to get things done, as opposed to being completely replaced, which I think is a very big misconception.
[16:54] Yeah. Again, back when the internet started becoming popular and people started figuring out what the internet could do, we had to interface with elements of the internet that now we don't interface with, you know? When we're exposed to things early on, we have to understand how they work a little more deeply, because that user experience hasn't been built around them yet.
And, you know, obviously browsers have come a long way and like, you know, all of these things that make it very seamless and more generally available. And I think that's going to be the case for the next five to ten years of just building on top of this very fundamental building block that will allow us to interact with it productively and understand maybe some of the implications of it.
[17:37] But again, we're still learning some of the evolutions and challenges and downfalls of the internet as more of those things are built on top of it over time. I just think it's going to be - it's going to be the same thing and we need to be paying attention to it.
[18:00] Yeah, especially the social impact. Who would have thought, when Instagram was first created, that a cute way to share photos of yourself would have the social impact it's had, especially on young people? Which makes me wonder what you guys are seeing around the explainability of AI - companies and teams focusing on that, or how they should think about explainability of AI in the future. Want to see if you guys have any thoughts on that.
[18:17] Yeah, definitely. I also just want to touch on the social impact, which seems to be very important in the AI space - like OpenAI's landing page for GPT-4 explicitly calls out the alignment and safety pieces, which I think is interesting to see. The general pressure to have a stance on these things is resonating inside of AI, because I don't think, you know, Facebook had to talk about the social impact of what they were doing, or web apps or mobile apps had to talk about addictiveness right out of the gate.
[18:52] So I think it is very interesting that, almost right out of the gate, AI is being upfront and cognizant about that, because it needs to be. And then on the explainability piece, I do think it's very interesting with these LLMs, because we did some initial validation research around using generative AI inside of our product, and some of the initial questions - especially because our audience is data scientists - were like, "Can we somehow get some kind of confidence scores? Can you give me some context around this prediction, or some details of how it's coming to these conclusions?"
[19:16] So I think that's going to be an interesting design problem - how do we create trust for a generative AI, and what is the most tangible way to do that? Because a data science audience is going to be more receptive to very rigorous statistical measures, but somebody who's more on the creative side - with Adobe Firefly, or other products that are solving different kinds of problems - how are they going to speak to their audience and still have trust in what they're putting out there?
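As one rough illustration of what a tangible confidence signal could look like: at the time of recording the chat models didn't expose probabilities, but the older completion endpoint did return per-token log-probabilities, and the average of those is sometimes used as a crude proxy. A sketch under those assumptions - the model name is an era-appropriate assumption, and the resulting number is not a calibrated probability of the answer being correct.

```python
# Sketch of a crude "confidence score": the mean per-token probability
# of the generated text, via the completion endpoint's logprobs option.
# Assumes the OpenAI Python client circa early 2023.
import math
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt="Q: What does LLM stand for?\nA:",
    max_tokens=20,
    temperature=0,
    logprobs=1,  # return the log-probability of each generated token
)

choice = resp["choices"][0]
token_logprobs = [lp for lp in choice["logprobs"]["token_logprobs"] if lp is not None]
avg_confidence = math.exp(sum(token_logprobs) / len(token_logprobs))

print(choice["text"].strip())
print(f"Mean per-token probability: {avg_confidence:.2f}")  # rough proxy only
```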
[19:48] Yeah, and a couple of thoughts here. The first is that NIST formally put out their framework around ethical and responsible use of AI - the AI Risk Management Framework. They're on their second draft now, so that's out in the wild. I'm not saying it should be everybody's end-all be-all, but it's a good place to start if you're thinking about risk and responsible use of AI in general, even from the development perspective. They include some stuff in there.
[20:18] And we can flash that up for everyone so they can see it. It might be in subsequent draft versions of the framework, but they're actively working - NIST and ISO both - on standards we can put in place, just like they do for cybersecurity. I mean, this is going to be the next evolution of understanding risk at scale.
And that's the number one thing that I think prevents a lot of companies from mass adoption, or from moving very quickly in that direction: the risk exposure. The second is just not knowing what you don't know, which again ties into risk. And the third is the cultural shift to move in that direction.
[21:12] So for a lot of these organizations who we call kind of non-native - you know, non-AI native companies, right - they've been around for 50 plus years. Their operations are working in a sense to keep things going, and they're not dependent on AI to function, whereas this new wave of companies is dependent on AI to function.
So I think it puts less urgency around shifting out that building block to something like AI, especially if they didn't start with using it. And so that's one of the most interesting transformations, in my opinion, is what that landscape is going to look like because, again, it is top of mind for everyone.
[21:53] Yeah, you know, we're having conversations with AI lawyers about some of the trends and implications they're seeing. And I think it's interesting to note that there are existing laws out there that can still be broken with AI - they don't have to be AI-specific laws. So that's also an interesting trend. When you're talking about ethical and responsible use, I think that industry has made a lot of progress in the last five-ish years, and now we're seeing some formal frameworks getting published and becoming more popular.
[22:25] Especially with what we talked about coming out of the EU - you've talked about it as kind of a GDPR for AI - and how to set up that structure around copyright for created content. I'm also seeing a lot more documents released by the Federal Trade Commission here in the U.S. In the years ahead, I could totally see many more requirements for companies that are using AI to build in explainability and have better documentation around the processes that drive their AI internally.
[22:50] Well yeah, think about if every single system you interfaced with had to tell you that there was AI under the hood, had to tell you that the output was generated - like, "here's the attribute about you that heavily weighted the output to be X," right? Imagine if there was that level of transparency.
And I know we keep going back to this - I try to find a lot of parallels that exist in the world today, so think about the terms and conditions we all blindly accept whenever we sign up for something free. But what if there was a new standard on some of these systems under the hood that enforced transparency? So as a person, I could understand what information was driving the outcome that either got me approved for a loan or didn't.
[23:21] Why do I need to explicitly ask, and try to find the right person to talk to, and try to find the right questions to ask, if I don't know? We need to think about the general population and the implications of these systems on them. And I think that's going to be a really interesting user experience trend with AI over time.
[24:09] Yeah, it's almost like the labels on products - "Made in the USA." You're going to see "Made with AI" as a common stamp on products too. That could be interesting.
I had a question related to companies and their overall programs that they're trying to roll out to build better transparency around AI models, and data and analytics in general. It seems like they're structuring these programs - we hear a lot about literacy programs, data engineering modernization, things like that - as if they're trying to avoid a certain two words: the L word and the T word. They're trying to avoid "learning" and "training" as the words for it, going instead with adoption and changing habits, as a change management play.
[24:37] So curious as to your guys' thoughts on that. I wonder if it's because training has a bad connotation, or maybe training is more associated with advancing your career, whereas this is trying to change habits at work. But I'm very curious as to why these programs are being structured less around learning and training and more around changing habits: "We're following these processes now, we're doing these things differently, and we need to get everyone aligned at a very large scale across our company."
[25:13] Yeah, just a quick clarification for the listener, because I know it's confusing - "learning" and "training" are also AI modeling terms. We're talking about the people, and the way they're going to be building and using AI, not the actual training and learning of the models themselves. That's always tricky when you're talking AI, because there's some collision of those terms. People learning and training, not model learning and training - the old-fashioned wetware, not the software and the hardware, you know.
[25:44] So in general, I'd say that shift feels like it's come from a focus on how do we realize the ROI of these initiatives - I think that's a big piece of it. Yeah. Training and learning are seen more as development of the people, which is important, of course, and companies want to make sure that's realized and actualized inside the organization. But given the need to show progress around AI, I think there's more of a drive around what's coming out of this, right?
And training and learning have historically been seen as not as connected to that. So they want organizational change, they want career development, they want to modernize the skills inside the organization. But I think there's more focus now on tying that to the development and progress of AI, because that's a higher priority than the people development piece.
[26:28] It's more like: we need to show results around AI to be competitive, to really show that we're providing the latest cutting-edge technology. Training and learning are seen as slower ways to get there, if you will, whereas change management and those pieces are more directly applicable to the output around AI.
[27:00] Yeah, and I love starting with a definition around some of these things too. We've talked to so many companies about how they're trying to do this today, because whether or not they like it, there's a massive transformation that is happening and will go even further. So they're trying to figure out: how do we get ready for that transformation? And a big piece of that is culture and people.
And whether or not they want to call it training or learning or whatever, there is a fundamental change in the way that people are going to be working, interacting with data, and interfacing with AI systems that's coming, you know, no matter what.
[27:32] And so I think they're trying to structure it in a way that can move very quickly and stay extremely agile. And what's hard is that it's kind of like hitting a moving target. New standards are getting released, things are moving very quickly, new technology is getting put out into the industry - new concepts, new paradigm shifts, over and over and over again.
So I think a lot of companies trying to catch up are like, "Well, it's like hitting a moving target. What are the base fundamentals our people need to know in terms of skills? But more importantly, what are our standards and procedures and processes as a company, so that we behave differently?"
[28:09] And I think the actual learning part is a smaller piece of this bigger shift that's happening. So companies are trying to incorporate that element into the bigger, more strategic shift, whereas the learning and training topic is typically this bottom-up approach of "let's get everybody upskilled on SQL, let's get everybody to learn Python" - and those things are important, but they're not strategically aligned.
And I think that's where the disconnect is happening now - there's learning happening inside this workforce transformation and all of these big strategic initiatives, a hundred percent. But they're avoiding those words, from what we've seen, because they don't want it to get bucketed into this really obscure, overly generalized effort of getting people to learn one very specific skill.
[29:05] And that's one of the other workforce, future-of-work transformations happening at the enterprise level, specifically around data and AI skills.
[29:39] And I'm trying to compare this to other large-scale changes that organizations have had to go through, whether it's adopting more digital practices or adopting agile project methodologies. Does anything come to mind for you guys around how this compares to large transformations that organizations have gone through in the past?
[29:52] Yeah, I was actually just thinking about that, because it is interesting - agile had such a strong training approach, right, and so did other project management methodologies, even in the digital transformation vein. But I wonder if that came later in the life cycle of agile adoption, because by then they had already figured out the patterns that were working.
Also, agile was a way to realize more value from digital technology, but it wasn't really the initial onset of digital technology. So I wonder if it's kind of like, you know, that will come in five, ten years, or whatever that time horizon looks like, depending on how fast AI transformation goes.
[30:26] And thinking about that too - right now it'd be hard to train people on AI and data until you've figured out how they're going to be using data and AI, right? Whereas agile is much more tangible: this is how we're now going to be working as a team. And we focused on that a lot, obviously, as we did learning and training - how do you take this and apply it? But we did notice a lot of teams were still kind of figuring that out.
So starting with a broad scattershot - here are the topics across the industry, here are the considerations - is good, but they wanted something more tangible: "Hey, we need to crack this nut first, then teach people how to use this thing."
[31:03] And that's why I see us - the industry in general - focusing more and more on: what is our relationship to this thing, how are we going to use it, let's show some wins, let's scale that out. And then I'm sure there'll be a big mass broadcast of "this is how we work with it going forward."
But I think they're still trying to figure out those initial couple of layers, right? Especially in the broad enterprises. Obviously companies that are digital-native or AI-native have already cracked that and are doing the broader, scaled stuff. But I think that's just where the market - or the industry - is at right now.
[31:34] Yeah, and I see it kind of broken down into: what is it, why is it important, how should we be thinking about this and framing it - which are your big data literacy and AI literacy programs. The "how do we do it" is a little more nuanced.
[31:51] I wonder if it's almost like the hub is expanding, in terms of the people who are touching this new technology and working with it. It starts really, really tight at a lot of these companies, and then they work to bring more people into the fold through projects. And then eventually, once it gets to a certain point, it makes sense to pull the trigger on the wider education - "here's how we're applying it." That's the difference with some of the more tech-native companies: they're already at the wide end - "we know how to work with it, we know how to use it at scale" - whereas otherwise it's just tough to expand that hub without knowing exactly how to do it and who to involve.
[32:18] Definitely. I think the ones that are at the edge are also looking to share within themselves, right?