DOGE, DeepSeek, and data laws: everything we missed since FILED Season 2
Much has happened in the few months since the end of FILED Season 2. To kick off the new season, Kris and Anthony get together to discuss the biggest issues, including the Trump administration's so-called Department of Government Efficiency and its access to government data, and everything that happened in the AI world, from DeepSeek to a new AI direction for the US federal government and the wider world.
They also discuss:
- Elon Musk and the Department of Government Efficiency
- Challenges in data governance and transparency
- Data transparency and open government initiatives over the years
- The role of data management in AI
- Wider trends in AI regulation, including in the European Union
- The road ahead for governance and AI
Resources:
- 📨 FILED Newsletter: 2024 in privacy, security, and AI
- 📨 FILED Newsletter: 2025: what to expect in privacy, security and AI
- 📨 FILED Newsletter: AI regulation begins to bite, just in time
Transcript
Anthony: Welcome to FILED, a monthly conversation with those at the convergence of data privacy, data security, data regulations, records, and governance. I'm Anthony Woodward, the CEO of RecordPoint. And with me today is my cohost, Kris Brown, our executive vice president of partners, evangelism, and solution engineering. New role, Kris, new year.
Kris: Yeah, thanks mate. You're going to have a lot of fun with that one this year. I think we may need to do something about that before the year is out in terms of how do we streamline that. I think if you look at the letters, it's something like EV Pepsi or something like that, I think is what it shortens to.
We can, maybe we can roll there.
Anthony: Oh, I like it. EV Pepsi it is from now on. I'm definitely rolling with that. Fantastic. Well, we'll continue to work on acronyms and other processes for that. It's been a mad start to the year already, Kris; there is so much happening. It's kind of crazy. We recorded our last podcast in December, towards the back of '24, and we really wanted to start season three by looking at all of the events that have happened in the last little while, and to pick up on some fast-moving and changing topics that appear to really matter to our part of the industry.
Kris: Yeah. It's been wild. I don't think you can turn away, it's a little bit “train wrecky” in a sense. There's been a lot going on just in general, but if we be really specific about our industry and just data, there's so much going on.
Anthony: Yeah, it's crazy. I mean, we've seen the Trump administration bring out the Department of Government Efficiency.
DOGE. There's another acronym that I really struggle to pronounce, I think. And we're seeing some real cracks, I think, in the fabric of how government deals with data and security and privacy, given the speed that that's going. And look, it's very much a developing story, but so much happening there.
Kris: Yeah, this could be a podcast on that alone. We'll try to be not so political and stick to the data elements of this, but I think it will be hard.
Anthony: Yeah. And I think we should come to that a little later. But also the world of AI, it just keeps rolling on: new announcements from OpenAI, we saw the announcement of the DeepSeek tooling that has come out of China, and a bunch of innovation happening there.
And we continue to see a whole bunch of stories about Copilot and how it's being used. There's certainly a lot of commercial applications occurring, but some good stories and some bad stories occurring in the last little while since we recorded last.
Kris: Yeah, it's been crazy. Certainly, I think there's been a lot of good stuff coming out.
People are starting to actually see some value. But of course, with every value story, I would say there's probably three or four others where it's like mistakes are being made or unexpected or unintended outcomes happen.
Anthony: So, I'd really like to kick off, Kris, and I'd like to pick your brains about Elon Musk and what's occurring with the US government data, and we'll keep the politics out.
This is really a conversation, I think, about the data and the interchange, but what's going on there?
Kris: Yeah, it's interesting. DOGE is taking a very interesting approach to information that exists already. And look, I think ultimately, again, non-politically, you would want government to be able to say, hey, this is what we're doing with our information.
This is what we're doing with the money, the spend, and that ability to actually come back to all of that data that is housed by these government departments, understanding and working through that. The meme-worthy stuff, like, hey, I've got a spreadsheet that doesn't add up, we probably need to investigate that, is probably a little simplistic in the way in which they're looking at things. But conversely, bringing in hordes of people and just taking carte blanche access to some very private information across the government is probably also not what people want to be happening. There's a lot of process. I'm attending a conference in April; we're heading off to Sea-Air-Space, a very strong defense conference, and a lot of the conversations in the lead-up have been about the way in which to communicate, the way clearances work, the expectations, especially for a foreign national coming into the US and moving into what is a very defense- and military-focused conference. The steps and the processes and the things that are in place are just part of normal, everyday management of this information, even just in the way you communicate.
I'm surprised it has to be said, but, you know, we were given a briefing this morning about not letting tradespeople in, not leaving laptops behind in hotel rooms, and things like that. The bad actors are out there at all points in time. In this instance, under the guise of government efficiency, they seem to be jumping past those hoops, you know, making sure that the clearances are there, making sure this access is regulated.
The term that people are using, you know, again, it's a little bit meme-worthy, but it's that this might be the largest data breach in the U.S. government. Like, what are your thoughts there? Is it overblown?
Anthony: It's hard to say, but I think when you talk about a data breach, you stray a little bit into the politics of the situation, because the Department of Government Efficiency is run by Elon Musk, who is an appointee of the president to carry out a special government employee program.
I think the question is deeper, though, around how data should be interchanged, because I think we can all agree that a lot of the data that is in the government should be publicly accessible and should be scrutinizable. We're not talking about personal data; we're not talking about social security numbers; we're not talking about birthdays or addresses. We are talking, though, about how the government spends its money, how things occur. And there have been lots of initiatives: you know, Steve Ballmer had an amazing open government initiative, and he built a series of businesses once he left Microsoft around creating that level of transparency. So I think transparency in the system isn't a bad thing, and it's something that I think most people would encourage and try to see more of. But what we've seen is wholesale access to entire systems within the government apparatus, and really unnecessary exposure of information that doesn't help make the decisions that, in theory, the government efficiency agency, or DOGE, is looking for.
So, what I'd wonder is, why do we not have systems in place? If we believe (a) that transparency is a good thing, and (b) that we want to scrutinize and understand and look at the critical pieces of data, then we could protect the key pieces of data but also carry out this process. And I think what we've seen here, and it's something I think we've talked about on this podcast for some time, is that the work hasn't been done to put those systems in place, in the main. There are places where it had been done, so we have to be careful to say that it wasn't everywhere, but it wasn't easy to share that data. And so, what was given was just wholesale access. And that then leads to these ramifications.
So, I do wonder out loud, would we be in a different place (and it's hard to say now, in retrospect) if we'd put systems in that had some level of protection for people's privacy data, but where you could see the rest? Systems that had some levels of redaction over the material, but where it could still be put into a database to be analyzed and have some AI run over it, or where that AI could look at that material and redact it with some homomorphic technique.
So, I do think it really opens up some interesting conversations around: what are the practices we want with this data, and how do we make it effective so it can be shared and used by different stakeholders in different places, but not end up in a place where we're doing wholesale exposure.
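As a rough illustration of the redact-before-sharing idea discussed above, here is a minimal sketch. The pattern names, placeholder format, and sample record are hypothetical, and a real system would rely on far more robust detection (entity recognition, validation, context) than bare regular expressions:

```python
import re

# Hypothetical PII patterns; illustrative only, not from any real system.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DOB": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII with typed placeholders, leaving spend data visible."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

record = "Paid $1,200 to J. Doe, SSN 123-45-6789, DOB 01/02/1960"
print(redact(record))
# The payment amount survives for spend analysis; the identifiers do not.
```

The point of the sketch is the ordering: redaction happens before the data ever reaches the analysts or the model, so scrutiny of spending never requires exposing the person behind the record.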
Kris: But haven't you just described the challenge of the industry?
Like, ultimately, I read an article the other day, and it was comic relief, it wasn't serious, but the roll-up was: we're trying to build a new standard for managing and sharing data. And then the conversation will be, but there are already many standards for managing and sharing data. And now we have a new number, which is the number of standards we had for managing and sharing data before, plus one.
And we've still not got to the same outcome, the same intent, which is: we've made this easy, and then it's reusable. Because I think that information governance in this instance is exactly what we're looking for. We want to be able to say who should have access, and what is actually there, so that all the same standards, the same outcomes, apply.
If I know what I have, and I know what's in it, and I've got metadata or markings or tags, whatever you want to call them, that help me to identify, for every individual piece of information, what it is and what it can be used for, then I can create a process to gain access to that for the right reasons, the right purposes.
And you can make those definitions. I mean, this is the definition of information governance, data governance, at the end of the day. Is it where the industry has failed? Is it where the vendors have failed? Is it where the governments have failed? Because there's plenty of value here: that three trillion, or whatever the enormous number is, in terms of the savings that they want to bring to bear.
If you gave me that money, I could probably build the standard.
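The tagging-and-access idea Kris describes can be sketched in a few lines. The purpose names and tag vocabulary here are invented for illustration; the shape of the check is the point: a request is granted only when every marking on the record is permitted for the stated purpose:

```python
# Hypothetical purpose-to-tags policy; labels are illustrative only.
ALLOWED_TAGS = {
    "spend-analysis": {"public", "financial"},
    "fraud-audit": {"public", "financial", "restricted"},
}

def can_access(purpose: str, record_tags: set) -> bool:
    """True only if all of the record's tags are allowed for this purpose."""
    return record_tags <= ALLOWED_TAGS.get(purpose, set())

invoice = {"financial"}                  # routine spend record
personnel = {"financial", "restricted"}  # contains sensitive detail

print(can_access("spend-analysis", invoice))    # True
print(can_access("spend-analysis", personnel))  # False: needs a narrower purpose
print(can_access("fraud-audit", personnel))     # True
```

With markings like these in place, "give the efficiency team access" becomes a policy decision over tags rather than carte blanche access to whole systems.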
Anthony: And get you there, could I not? Well, I mean, here are some stats, right? There are 2.3 million civilian workers in the US government. And in order to save that kind of money, they're going to have to have a reduction in force on the order of about 500,000 of those.
So, a long way to go in terms of that process. But the issue is that they're working through that looking just at the people aspect (DOGE is looking beyond just people, at the programs and the data, but we've drilled into the people aspect). We've already seen a bunch of unions file cases, specifically around sensitive data. But what we're not seeing, I think, just yet, and what I think we need to educate them on, and educate the world on, is the judges in those cases making findings that do more than just block DOGE from the sensitive data. I think there was a case that just happened with the US Department of Education, for example: they're just wholesale blocking, rather than actually creating a recommendation which says, here are these processes.
So, I think it is up to us as an industry, as vendors, to really engage with folks that aren't probably as technical, that don't understand how data can be redacted and managed and driven in such a way that strategies can evolve. And I don't think we've done a very good job of that, and I think there is a lot more work for us to do there.
I do think in some corners that's being heard around this particular issue though.
Kris: Yeah, look again, take that political aspect out of it. It's drawing attention to the fact that data is actually the thing that we need to drive these efficiencies. Every business should be able to replay this outcome and say, what am I doing with the funds that I have?
And what am I doing with the data that I have? And what are the things that I could be doing that are more efficient? And then how do I improve? Ultimately, this is all about improving productivity. Those are really interesting statistics, because they show that they're effectively saying, in order to get those sorts of spending decreases, they're removing, you know, 500,000 of the workforce, right?
So, here in Queensland, we had an election in the 2012 timeframe where that is exactly what they did. They came in and they removed 5.6 percent of the workforce. Now, to be fair, there were only 250,000 people employed in the Queensland government workforce, and they removed 14,000.
Interestingly, while you were saying that, I quickly checked: they're actually back closer to 258,000. So, there are over 8,000 more some 10 years later. It was a great idea at the time; he effectively got elected off the back of it. It was, you know, we're going to remove the "fat", in inverted commas, from the government, and they came in and did it.
As someone who lived through that here, I think it was probably too much. There were a lot of missing services and a lot of inefficiencies created by that level of reduction. But they did it with no data. It was almost, put your finger in the air and see where the wind took you, is kind of what it felt like at the time, remembering that at the time I was very much selling into that government and trying to help them understand how to get those efficiencies.
So, I think maybe it's on us in industry, and on us as practitioners, and on us as "experts", in inverted commas, to be like, well, how do we show the efficiencies more? How do we show them that return on investment? How do we help them get to that place? So, I think it'll be really, really interesting to watch this as it moves forward.
Because, as you say, government agencies need to share data all the time. They've been sharing data forevermore. You know, every one of these agencies has information that would probably be useful to agencies beyond their own, without having to go out and re-share or re-collect or reproduce that information, which all costs money. And in reverse, how can they sort out and find those efficiencies?
And, you know, AI is really good at that. It is really good at helping you understand trends. It is very good at helping you summarize the information that is there, and at a pace that a data analyst cannot match. You know, for years and years, this would have been a high-paying job: to come in and analyze a lot of data and come up with results.
And AI can do it in moments, let alone the years that some of these programs would normally take. So, it's a great idea. I think the intent is very good. The implementation is something that we probably need to help with, I think.
Anthony: Yeah, and I think I'd really also call out, I think the technologists, but also the big vendors, Microsoft, Google, Oracle should be more here in this space, right?
I do think they leave a little bit to be desired around helping people be more transparent; that's not the core of what they're enabling. You know, I think we obviously as an industry need to drive towards that and have more of these conversations. And look, I personally have reached out to DOGE and to Elon Musk and asked the question of, well, if we assume the goal is reasonable and we're dealing with reasonable people, then can we not implement some of these things as we go?
Like, are there ways to resolve the conflict between the sensitive data and the outcome? I firmly believe there are, and hence why we reached out. We'll watch this space and see if they come back to us. But I think it is really upon us all to keep having this conversation, because this isn't the only place this is going to take place as we roll forward. This is going to be a continual conversation in all sorts of countries and all sorts of places.
Kris: And again, the really interesting thing there is: in first-world nations, Australia, the UK, America, Canada, the intent is good, and the process, while maybe flawed in the first instance, has good intentions; there was probably just no real balance in what they're doing. There will be nations where that intent will not necessarily be so good.
There are plenty of nations which, while there is government, will have data that will be used, resold, or put to the wrong purposes, you know, once they get the ability to do these things. I think it's a really interesting point to say: this will be something that is going to be reused and repeated.
If they have any level of success, even if they are only 10 percent successful, other governments are going to do this. If they have any level of success, every business should be doing this. But I want to pick up on the point you just made there, which is that the larger players here probably need to be more active.
60 percent of the corporate data in the world is currently stored in cloud data centers: 32 percent market share to AWS, 23 percent to Azure, and 10 percent to Google Cloud, for example. So, the bulk of the world's data is stored with three vendors. If those three vendors came to the party and said, here's how we're going to help you mark that data...
We'd be a long way there already. So, yeah, I do think that's a really interesting point. They are holding the data. They apply backup practices today; they apply all sorts of other practices today, obviously commercially for their own benefit. But maybe there's a public-good element to this conversation that these vendors could start to bring to the fore, especially as, you know, Gartner's trend is for 90 percent of the world's data to be stored in those sorts of data centers by that 2030 timeframe.
Anthony: Yeah, probably switching gears a little bit, Kris, we talked earlier at the beginning of the program, and I know it's linked to this conversation at DOGE, but AI, there is a lot going on in the AI world.
Kris: Yeah, look, I mean, I've got three or four sets of notes here and maybe we can dive into each one depending on how we go for time.
But President Trump came in and revoked a bunch of things that Biden did, you know, specifically as it relates to this conversation, that executive order on AI. He did order a new one be developed, which is good, but just taking the other one away does mean that, at the moment, we're in a vacuum of, well, what should we be doing?
Whereas there was at least some direction before. I'll argue that point in a moment. He also appointed David Sacks, who's a VC, as the new government crypto czar. And obviously, straight after that, we had the announcement of Project Stargate. So we've removed all of the regulation around AI and then thrown what is nearly half a trillion dollars towards AI infrastructure, and I think that will be a very interesting topic for us to discuss. And then straight after that, almost timely, depending on which side of the conspiracy-theory alley you lie on, DeepSeek, the Chinese firm, announced R1, a model that's actually demonstrated performance like ChatGPT's, but trained for far less cost. Given those investments, it was very, very interesting.
And of course, we have our friends over in the EU shelving their AI liability act, which would have imposed requirements to pay compensation to people who actually suffered AI harm. So, that very, very short period between when we were last talking in December and now probably really only covers up to the middle of February.
There's an awful lot that's gone on there. And sitting here in the space as a practitioner trying to give good advice (and I know I'm speaking at a number of events through April about what people should do around AI), this is a moving feast. It almost feels like a little bit of a move away from regulation, especially in the US space, but continuing investment, right?
Anthony: Yeah, I wonder if that's just a media view, right? Because if you unpack the Biden executive order that was repealed and, you know, even the Trump replacement, the media is talking about less governance of AI and less regulation. And certainly, I think there is a theme going on around that.
But the Biden executive order had eight core principles. The first was that AI needs to be safe, robust, reliable, and repeatable, with standardization of policies and institutions. The second principle was that the US should promote innovation, competition, and collaboration. The third was around the responsible development of AI: committing to making it unbiased, but also supporting American workers in that process and making sure that the impact on labor was not undue.
The fourth was that the policies, again, were consistent with the norms of today around bias, equality, and civil rights. Number five was that people needed to be protected when AI was used on them: you need to be clear if you were using it, and in your personal life you would be protected. Number six was your privacy and civil liberties, which is really the one we probably spend a lot of time talking about (and if you're Googling this, Kris, feel free to check on me): it was really around protecting the collection, use, and retention of data, and making sure that was lawful. Seven was really the federal government's own use of AI, so just regulating how it was going to use it, as a principle. And then the last point was really around how the federal government should lead the way in societal, economic, and technical change.
But if you actually look at the guidance from the White House and the direction Trump gave in his order, it took away some of the clear guidance on how the federal government should embrace and use AI, but it really had a lot of similarities, in terms of the AI Action Plan and the summary that has been asked to be built to enhance America's AI dominance.
That's led by the president's advisor for science and technology and the White House AI and crypto czar. So, there were some clear processes. Clearly there's much more of a bias in this administration toward removing the barriers to the growth and use of AI, particularly American-developed AI, but it really still had some language in it about human flourishing within an economically competitive framework. So, to me, there are a lot of similarities there. And I wonder sometimes, trying to look at this from a clear-eyed perspective, if we're just going through different administrations in the US, and there are just different lenses they want to apply to this.
So, I do wonder; we'll see. Maybe there's still a bunch of regulation, just in a different form, that falls out of that: maybe not as strident as what we previously had from the FTC, the Federal Trade Commission, but certainly a continued evolution of governance. And nowhere in any of that did it say otherwise. Yes, there is a push to remove red tape, but there's nowhere where it says we're not going to try and govern this still.
Kris: Yeah, look, I think I agree. And certainly, the reaction from states has been to start to look at their own approaches. And I think, again, we know it's the same as it has been in the privacy space: there is no federal privacy law, and the states are the ones who actually do a lot of the heavy lifting.
And I agree that the action plan, using those exact words, was focusing on promoting that human flourishing and economic competitiveness, and of course national security as well. It was very much, I think, business-first: promoting that ability to get on with it, be competitive. I guess the announcement from DeepSeek was probably the same thing there.
So, we need to make sure that, well, this is your moon landing. This is the space race, but from an AI perspective. Come on, USA, let's get on with it, let's get effective and get good at things. That said, I don't believe that those ethical safeguards were completely removed. I think those things are absolutely still in place, and certainly the individual states will have their own ways of dealing with those things very specifically.
Again, there was no mention of, you know, reducing civil rights or removing jobs or all those sorts of things. I think even the labor experts said, you know, the rollback around the safety of labor protections was raised as a concern, but there's lots of value.
Anthony: Well, it's just unknown yet, right? We haven't seen the next stage. What I think is interesting is that clearly there's going to be investment in the area. And we've seen with Project Stargate, and even with DeepSeek, without getting into the different politics of different countries, that the environment is one where, depending on how you want to define governance, it's a big topic for AI. And it's not just about protecting sensitive data when we talk about governance; it's also about making good models and making AI more effective. And really, underneath that, it comes back to our continual conversation on this podcast around data governance, which is the key feed: the higher the quality of the data, the better the quality of the outcome of the model. And I think that's super interesting when we start to talk about DeepSeek versus Gemini versus those other processes.
These are highly curated models; they're not the models that you're going to see in the average corporate institution. There's a heap of data scientists working on them, doing a whole bunch of weightings in them. Part of Project Stargate is for the federal government in the US to take more advantage of that and build its own capability.
But the key thing that I think most corporates are gonna have to go back and focus on is how do they make the most of this utility as it rolls out there. And how do we take advantage of these different models, bringing our own data to the fore and making sure it's clean and ready? And that problem hasn't gone away.
Kris: No. And I think that comes back to the beginning of the whole podcast, which is: if you just have this army of people going and looking at all of this data, they're unable to really make the most of it. Imagine if it was tagged well. Imagine if it was classified well. Imagine if you had a good understanding of what it was used for and what is in it, in terms of its intrinsic value; that would help you to know what to do with it next.
You know, every one of these models that has been built (and I know that OpenAI, more recently, their 4.5 is struggling with some of the things that 4 was very, very good at), it's, again, what are we training for? What is the data being used for? Having access to more high-quality information will lead to higher-value outcomes, and straight good records management and data governance practices give you that. I'm not sure that there are enough people screaming that from the rooftops in these organizations. Because, interestingly, as we talk to people in industry, and even more so yourself, talking with people in the boardroom, the understanding isn't there that it's not just, oh, I flick a switch and I get a great answer.
It is data scientists looking over large volumes of data. And if it's large volumes of high-quality data, the high-quality outcome is absolutely going to be better. So, there are all of the ethical concerns of making sure that I'm doing the right thing from a privacy perspective.
All of those things don't go away. But just having a good handle on your information in general is going to give you that leg up, that advantage as these models get better and better.
Anthony: Absolutely. And that continues in those different frameworks. One of the things that interests me that did happen through the period, which a lot of people have been pointing out, and I think this also comes back to the question of how people find utility,
was that the European Commission didn't continue its AI liability act, or, more correctly, hasn't progressed it much further. Really, the notion of that process was that there was some redress: there were liability rules to redress issues where AI caused harm, effectively starting to put some economic value on it. Now that's kind of paused a little bit, which had people asking questions of, well, what are the penalties for not doing the right things? But it's super interesting that even in the EU we're seeing this evolution of the landscape. It's getting really difficult, I think, for legislators to just think about this as a purely regulatory process. And, you know, the EU has a much stronger AI legal framework in place, but it doesn't quite have the penalties there yet.
Kris: Yeah. And I think maybe that's tied a little bit to this broader trend of being more competitive. Like, the EU in general has a lot of regulatory frameworks, GDPR being obviously one of the more recent ones that sort of aligns to where we live.
And there have been lots of very big fines handed out, but there's also a lot of burden on business here to meet these requirements. And so, yeah, they're trying to create a regulatory environment that protects the individual user (ultimately, they're all about protecting the individual user; that's the reason behind this) while ensuring that you're able to leverage these things: they don't want to be too far behind in terms of competitiveness and productivity. And I know here in Australia, we've got that productivity problem. We've dug lots of rocks out of holes in the ground, and those rocks are worth a lot of money, but we're not producing additional value from them.
The productivity off the back of this is one of the reasons why the government here is looking at climate and climate science, and at how we build new industries around solar, batteries, alternative energy generation, and alternative energy capture and storage. There's productivity to be had there for our economy.
And I think, you know, the EU has a very, very similar thing. Some of the articles that I read around this were very much saying the committee's decision was aligned to a broader initiative to streamline regulation and remove that bureaucratic burden on business, and that now wasn't necessarily the right time.
Perhaps GDPR, perhaps good data governance, will give them enough of an ethical background to be like, well, do the right thing by the data, and therefore you'll get these good outcomes on the business side. I'm very much paraphrasing and putting my own spin on that. There are lots of thoughts there, but I think we've got to be careful, because it's not really a gold rush per se as it relates to AI, but there is economic benefit from being more efficient. There is economic benefit from using these new technologies, and you wouldn't want to restrict organizations too much. Anecdotally, one of the really interesting things was at last year's IAPP event.
And I know that was probably before a lot of this, but at last year's IAPP event, I spoke with a chief AI officer at a very large but up-and-coming business in the United States. One of the interesting things they were concerned about was how to do this ethically. He'd set up all the programs and put governance in place.
He'd gone through all of the processes: if you want to use AI, you need to come to the AI board, present your case, and the board will decide whether this is an ethical use of AI or not. So he had all the right processes and structures in place. And his feedback was that the biggest struggle wasn't that people weren't trying to do the right thing.
It wasn't that they weren't trying to put the right processes in place. They were doing all the bits and pieces, but no one could guarantee what was in the data. And they were asking very simple questions of the people who wanted to use it: what's in that data, and how do you know it's not harmful?
And we just came all the way back very quickly to data governance. And it's like, well, we're not doing the simple thing.
Anthony: Yeah.
Kris: So, perhaps as it relates to these new regulations from the EU, they had a look at that and went, well, we've got a very strong set of regulations around data. Should we be looking at that more?
You know, how do we actually leverage it? Because they're quite happy to dish out fines on the GDPR side; plenty have been handed out over the years. And it's the same message: just do the right thing by the data and you're going to get this value. There's that competitive edge that can be gained, and the economic productivity element of this.
So, yeah, it's interesting. As the vendor, having regulation coming back into the market makes it easier, when talking to a customer, to go, "there's the thing you have to do. It says so here in these laws. I can help you meet that regulation. Why don't you buy?" But that's selling fear: don't do the wrong thing.
I think in the AI case, and even in the data governance case, there is now such a high value on just having access to good data. As an industry, we should be at the point of saying, "hey, all that stuff you want to do with AI, all of these values and benefits you want to get? The quickest path here is to have a great handle on your data," and we probably don't need any more regulation than that. This is one of those things where, as an industry, we should be extolling those benefits.
Anthony: Yeah, look, I'm not sure I completely agree with, “we don't need more regulation.” There's bias and a bunch of other things that get quite interesting.
But I think your point is well made. A lot of the conversation is sort of like this: growing up in Australia, as I know you did, Kris, you know Australia has a lot of unsealed roads. You can absolutely drive your car on unsealed roads, but you shouldn't go too fast, because when you need to slow down, you get a bit slidey.
And the reason we have these nice, tarred roads is that it's a better experience for people carting stuff around, and the roads don't need as much maintenance. One of the things that happens with those unsealed roads in Australia is they develop lots of ruts, and it's quite an unpleasant drive.
So, we have these sealed roads. I think AI right now is actually in unsealed-road mode, right? Yes, your car can go on it, it can go fast, but it's not the best ride. It's kind of bumpy. You probably don't want to transport a lot of goods on it, because they'd probably get broken up. And you really will struggle if you need to slow down, because you're not really sure whether you'll shoot around the corner or shoot off the road.
And I think what we want to do is get the data up to a point where it's sealed: it's known, it's in the GPS, and you can use your Google Maps and all the infrastructure that comes with these beautiful, sealed roads. So that's the analogy when I talk to people: look, you need to think about the fundamental building blocks you're building on top of. You've got to have that foundation, and that comes with a little bit of regulation. Because when we talk about roads, we all decide what side to drive on, we decide what speed we're going to go at, and then we police these things to make sure people are doing the right thing. But we let those roads function, and we let people have at it on the road.
And I think that's what we want is these wide-open lanes of AI, but just some parameters around how we make that work.
Kris: And let me double down on that analogy with a real-life example. Only in the last 12 months, I drove from Brisbane to the three corners, which is the three corners of Queensland, where it meets New South Wales, South Australia, and the Northern Territory.
We had one vehicle in a group of 10 that came with us, a vehicle worth hundreds of thousands of dollars, that had all of the work done to make it capable of driving across those rutted, unsealed roads. I took my very, very stock Isuzu with me, and we managed to make it back to Brisbane safely after 10 days of driving some 4,000 kilometers, or for our non-metric friends, close to two and a half thousand miles.
Anthony: 2,600, right?
Kris: Close to two and a half thousand miles. Quick, quick math. It was the very expensive vehicle that lost a suspension control arm in those ruts. So even with all of that money invested to avoid those problems, the problems still occurred. And yes, absolutely, we could spend a lot of money to try to get there, but we could also just do the right things, seal the roads, and make this far easier for everybody involved.
And I think that that's a great analogy.
Anthony: So, I think it's really good to be back and talking about all of this, and there is so much going on. We're going to unpack a whole bunch of it over the next set of episodes. We have a big lineup planned. We really want to drill into what's happening in the US states: there's a patchwork of legislation going on out there in terms of AI and in terms of data governance, and it really is a topic that's not going to go away, I don't think.
Kris: No, absolutely. And I think we'll probably double down this year on that conversation around the best way to achieve strong AI governance and get the most benefit is going to be through strong data governance.
I'm really looking forward to some of the guests we've got planned. We'll keep that close to our chest for today, but there are a bunch of laws and other regulations out there, and we don't necessarily need to do anything more than help organizations manage their data better; that will give them these benefits around AI.
And I think, you know, there's, there's going to be some great conversations this year around that.
Anthony: Thanks for listening. I'm Anthony Woodward.
Kris: And I'm Kris Brown. We'll see you next time on FILED.