AI regulation vs. privacy regulation with Paul Sonntag
What should companies be doing right now to get ready for future AI laws?
If you comply with privacy regulations, are you set up well for AI laws when they are inevitably passed? How much emphasis should be placed on compliance over, say, ethics, when it comes to AI?
To discuss the answers to these questions, Anthony and Kris are joined by Paul Sonntag, a 15-year veteran of the security, privacy, and compliance space, and co-author of The Relentless Innovation Model: Build better products, services, and more resilient business through continuous invention and innovation.
They go deep on AI from a technological, business, and legal standpoint, discussing the best ways for companies to leverage AI safely.
To make these tricky decisions easier, Paul shares his tool for negotiating these questions, what he calls the “Goon Test.”
They also discuss:
- AI regulation and privacy concerns
- The risks and challenges that come with AI implementation
- The global AI regulatory landscape, including the EU AI Act and various initiatives in the US, UK, and Australia
- Is privacy law as it is currently written suited for the AI age?
- The future of AI and the impact of market pressures
Resources:
- 📨 FILED Newsletter: You don’t need AI laws to establish strong AI governance
- 📨 FILED Newsletter: AI regulation begins to bite, just in time
Transcript
Anthony: Welcome to FILED, a monthly conversation with those at the convergence of data privacy, data security, data regulation, records, and governance. I'm Anthony Woodward, the CEO of RecordPoint, and with me today is my co-host Kris Brown, who has a new title, I think the first for the podcast: Executive Vice President of Partners, Engineering, and Solutions.
How are you, Kris?
Kris: Mate, fantastic, and I just can't wait to hear you mangle that for the rest of this series, but no, very exciting new role here at RecordPoint, and really, really looking forward to our focus here on FILED this year.
Anthony: Yeah, look, for the listeners, looking at my run sheet in front of me, it does say executive, executive vice president.
And I was just before the podcast accusing Kris of adding the extra executive there just to trip me up in that title. But it's exciting to get onto today's topic, I think.
Kris: Yeah, thanks, Anthony. And of course, you know, it wouldn't be me if I wasn't trying to trip you up. Today's focus is going to be on AI regulation and privacy regulation.
And, you know, what's the connection between them? I'm super excited to talk with our guest today. And we really want to understand if complying with, say, privacy laws is setting you up well for those new AI laws when they're inevitably passed in your region or your jurisdiction.
So, to talk about that subject we've got Paul Sonntag, a 15-year veteran of the privacy, security, and compliance space, and a co-author, and now it's my turn to try and mangle a title, Paul. So, I do apologize. The Relentless Innovation Model: Build better products, services, and more resilient business through continuous invention and innovation.
I think I got through it.
Paul: Yeah, nailed it.
Kris: Great to have you on board, Paul.
Paul: Glad to be here; a genuine pleasure.
Kris: Excellent. Fantastic. For our listeners, do you want to take a moment just to give us a little bit of, you know, who is Paul?
Paul: Yeah, absolutely. So, yeah, I've been in the compliance and security space for about 15 years.
I was a consultant of some stripe or another for more than half that time. I made my bones building PCI compliance programs and did that for a number of years. Then I switched to the other side of the table, and I worked as a QSA for about five years working primarily with mid-market and enterprise clients.
Did some assessment and program development work in the healthcare space. At my previous employer, I established a privacy practice. We did program development and compliance assessment work for GDPR, CCPA, CPRA in the United States, and a few others. Somewhere in there I got interested in organizational psychology, thinking about how people work within groups, and how focusing on that, rather than just relying on emergent behavior,
can produce better outcomes for your customers. That works really well for compliance and governance as well, for a lot of the same reasons. So, it's been an interesting ride, and now here we are: my three-year-old laptop can pass a Turing test, and the next major tech disruption is upon us.
Anthony: Absolutely is. And what a great place, I think, to kick off the conversation, really building on that Turing test. You know, we're seeing an explosion; every time I turn around and talk to anybody, AI is the only word they seem to actually want to talk about. I was at a dinner last night with a bunch of execs here in Sydney, Australia.
And everybody was talking about, you know, what jobs are going to be replaced and how these processes are occurring, but I'd really love to drill into this. We're starting to think about AI and the application of this new innovation. What are the kinds of risks that it presents? Because it's not just a privacy risk.
You know, there's a lot of conversation about how do I keep people's social security numbers or address details out? And that's a relatively simple set of processes, but it's more about thinking through the other elements here.
Paul: Absolutely. AI is extremely interesting from a risk perspective because it's capable of doing as much damage, spectacular damage, to the organizations that are deploying it as it is to their customers.
So, first off, you have a broad range of opportunities for brand damage, which I think is really important. So, AI, and just to kind of constrain the scope, when I say AI, I'm talking about Gen AI specifically, right? This technology is non-deterministic, meaning that we can't reliably predict how it's going to behave, what it's going to do, what kind of information it's going to disclose, or what it's going to say.
If you are deploying this technology in a customer-facing way, you know, it can produce nonsensical or libelous output, right? It can tell your customers to eat rocks or sell them a truck for a dollar. You know, it can give them bad or incorrect information about your product offerings or your commitments to them, you know, what your contractual obligations are. It can possibly say bad things about your competition. It can be manipulated into producing offensive material, right there with your brand on top of it, and it can reinforce your customers' perceptions that you're not really interested in their concerns or their experience with you or your product, right?
So, talking to a chatbot is kind of like being stuck in a circular phone tree, only ever so much more so. Customers are not fooled. They know that you are trying to deflect them by having them talk to this piece of technology. And that's never good for customer experience. On top of that, we've got a lot of labor-related problems, right?
So, a lot of companies are trying to reduce staff costs by replacing skilled employees with AI in areas where AI is not necessarily proven. And there are, of course, a lot of quality issues with that. There are widespread fears about job stability. Well, okay, not even really fears, right? There's already a lot of job loss in the content and media industries, right?
So that is a real thing. We've got a number of cases in which the use of AI is amplifying existing gender and racial biases in job hiring and employee performance issues, right? So, there's a lot of potential legal pitfalls there. Moving right along, we've got social and cultural concerns around unresolved ethical and legal problems with unlicensed appropriation of intellectual property to train the big foundational models.
Those questions are still making their way through the courts, and they might result in some nasty surprises for organizations that are using AI today. And we have concerns about the proliferation of low-quality content, right? AI slop, right? Everyone's had to shovel their way through that. You know, and how easy AI makes it to develop and deploy really high-quality disinformation, which is, you know, kind of terrible for the social fabric.
And then finally, you've got operational risks. You know, again, these tools can inadvertently disclose sensitive proprietary data. They can provide erroneous information to your workforce that can lead to planning mistakes or decision-making problems that can harm your company or your customers.
And they're expensive and they're disruptive. And every dollar that you spend chasing the AI train is a dollar that you're not spending on something that might deliver a genuine operational improvement to your company. So, it's interesting, it's powerful, it's got a lot of potential, but I don't think we have a clear understanding of what that potential is relative to the risks that it brings with it.
Obviously, regulators are working very vigorously to get ahead of this stuff with very mixed results, and I think that organizations that are just kind of banging along to ride the wave, might have some nasty surprises and some expensive cleanup operations in the future.
Kris: Yeah, thanks for that, Paul.
And it's a super comprehensive start there. Like I said, I had a bunch of notes that I took before we had the opportunity to have this chat. And it's interesting, when you really sit down and pick out the individual pieces, that AI is proliferating through a business, and the way in which you can interact with it, and the potential, as you said, that it can provide in terms of that upswing for a business: that ability to do things faster, be more efficient, be more cost effective. For every one of those,
there's that other side of the coin, where it's like, well, hey, there is a risk here that we do something silly. For years, even just being in the information governance space, it's always been about good decision making, and having access to good information helps with that. And therefore, you have that non-regulatory, non-compliance ball in your court of, you know, I'm not selling fear right now, I'm telling you that there is a benefit to having access to good information. And now here we are saying, I'm going to just drag all this information out of AI, and I'm not necessarily sure it's good, right?
You know, I'm going to make decisions. It can sometimes be as simple as that. And I think that was a great summary. I want to touch on a piece you sort of mentioned towards the end there, which is, you know, organizations are looking at how do we do this the right way? Countries are regulating, and so countries around the world have developed a number of AI regulations.
Obviously, there's the EU AI Act. There have been the Executive Orders out of the US, and the UK and Australia have had some as well. Obviously, a number of states have done things, but what do all these laws have in common? And if we start to talk about actions, what are the priorities for companies that want to start to comply, from your position?
Paul: Oh, right. Absolutely. The regulatory wave has been huge, you know, and it's important to think about when you're considering regulations and how these things develop, this all kind of comes from political opportunity. The hype wave around this makes it extremely popular to be able to say smart things about it.
Right. To be able to get into some press and talk about it. So, we're going to see more and more of this regulation, and, you know, here in the United States alone, I checked this morning and there were about 120 bills currently making their way through the state legislatures. So that's huge. They cover a range of things, from consumer opt-outs for automated decision making, to things that touch on bias in finance and law enforcement, use of AI for criminal investigation, and AI-generated material affecting campaigns.
Obviously, the EU AI Act is also providing some things to consider around risk classification, certain prohibited uses for AI, and those kinds of things. So, there's a lot of stuff going on, but if you squint, there are a number of common themes amongst all these, and those kind of boil down to protection of intellectual property, avoidance of bias in finance and law enforcement.
There are rules around biometrics, establishing data subject rights around automated decision making, and reinforcement of existing privacy requirements. Look, you don't get anything for free in this business, but a healthy privacy program should already have the hooks for some of these things.
As for preparing for compliance, you know, it's going to require the same habit of continuously looking around corners, trying to understand what is happening, and being able to dynamically change things on the ground. But none of this should really be terribly surprising to anyone who's been doing this for a while.
Anthony: Just coming at it from the other angle, though: when we start to talk about regulations, and what's been rolled out in the States, and thinking about privacy programs, the other risk I'm concerned about is, you know, the beauty of generative AI and the broader AI is that it is a true disruptive innovation, almost to the classic Clayton Christensen definition of that. So, we are creating a bunch of new value networks and a bunch of new markets that are coming out the other side of this. How should we be thinking about regulating in a space where it is unclear what that newness is?
And how do we set ourselves up to be able to take advantage of those waves, in your view? So that we can operate within the right sets of ethical and almost moral processes we need to have out there, but also take advantage of these new markets.
Paul: Right. Well, I mean, I think if you're thinking about it from like a regulatory perspective, I mean, just generally speaking, you shouldn't be chasing regulations.
The law is slow and it's highly reactive. And, you know, you used the word ethics, right? The ethical considerations here, that's really the core of it. You shouldn't be looking to the law to kind of draw a box around bad behavior, or tell you how slimy you can be, what things you can possibly get away with, you know. The current regulations, and a healthy focus on customer needs and expectations, ought to be enough to guide your behavior.
So, when I'm thinking about this kind of stuff, especially really disruptive things like AI that have a lot of potential regulatory disruption that goes along with them, I like to use something I call the Goon Test. And the Goon Test states that if you wouldn't or couldn't hire goons to do a thing, then you shouldn't use technology to do that thing.
So let me give you an example. Let's say that you operate an inpatient healthcare facility. Just to make it interesting, let's say that it is a hospice care facility. Okay, so you have a lot of patients who are terminal. They can be presumed to be psychologically brittle. You have caregivers who are probably also under psychological stress, so those are nurses, doctors, psychiatrists, and those kinds of roles.
These tend to be fairly expensive operations, and so there's always concern around operational efficiency and cost control. So, let's say that you had this great idea about trying to find places where you can cut costs, right? You can maybe reduce staff. You can be more efficient.
And so, you hire goons, right? Big guys in black suits with blank expressions. They never speak. And there's one for every employee in your organization. And they follow everybody around with a stopwatch and a clipboard. A nurse goes into a room: how are you doing, Mr. and Mrs. Smith? The goon whips out the stopwatch, clicks it.
The interaction happens. As soon as she's done, he clicks it again and starts writing noisily on the clipboard. She goes to the next patient, same thing. Goes to the bathroom, same thing. Goes to the break room, same thing. And at the end of the day, you have all this glorious data about who went where, how long they were there, and what they did.
Alright, so the questions you need to start asking yourself are: would that be the right thing to do for the patients? No, it would be terrible, because it's going to reduce the quality of care. It's going to upset them. It's going to make their families mad. Is that going to be the right thing to do for the staff?
Well, of course not. They're going to be under a lot of stress. They're going to be, again, reducing quality of care. You're probably going to see turnover. You're going to have retention issues with your best people, and so on. Would that data collection be legal under your current privacy and labor laws?
Could you actually collect all that information and use it for things like employee performance evaluations and that sort of stuff? And, you know, would you like to explain why you did that to your board when it hits the media and gets splashed around? If the answer to any of those questions is no, then you probably shouldn't be doing it.
You probably shouldn't be doing things like collecting everyone's location using GPS from their phones and using AI to produce a histogram of who does what and how much, right? And honestly, you shouldn't need to have a regulator slap you on the hand with a new AI-specific law to be able to make those decisions.
Because again, it comes down to understanding what your customer needs and what's really kind of ethical and moral behavior.
Anthony: I love that goon test. That goon test is fantastic.
Kris: I've just stolen that, just so we're clear, but I'm labelling my goons, Crusher and Lowblow, straight out of The Simpsons. That's instantly where I went.
It's like, hire goons.
Paul: It's all suits and t-shirts.
Kris: Yeah, absolutely.
Anthony: If we could spin it around a little bit, because I love the layering there of the goon test and that ethical framework for folks to think about. But I think it gets really difficult when you start to talk about data, and you sort of touched on that in your answer. That scenario is kind of easy; you know, in a hospital it's just clearly wrong, right?
Like, it doesn't make any sense to behave that way. But if I've got data and I know the provenance of that data, the utility of that data raises much deeper and harder ethical questions, you know. So, what can we be doing from a system perspective, from a process perspective, you know, even potentially touching on what's happening with the Department of Government Efficiency and things in the US at the moment.
There's a lot we can be doing in this space to think about data, isn't there?
Paul: Well, there absolutely is. Yeah. But again, it's the same kinds of tests. So presumably you have a large library of data that you've collected for certain purposes, right? So, let's say it's customer data.
Ask yourself: are we going to use this information in a way that would make our customers upset? Is somebody going to show up at the door with a hammer if I do this? If this were my data, if this were information about how I behaved, would I want this to be used? And so, in a lot of ways, there are some complex ethical questions to be asked, but I think at the end of the day, when you have personal data, you don't really own it.
I know there are a lot of airy-fairy beliefs about this, but, you know, if it's my name, it's my address, it's my social security number, I'm sorry, that's my data. You're just a steward of it. And so, you need to recognize that that gives you certain moral obligations. All right.
And again, the fact that you might be able to get away with it, you know, our current Department of Government Efficiency being a prime example of this, doesn't necessarily mean that you should. If nobody's going to see you break a window, that doesn't mean you get to pick up a rock and go to town.
Anthony: If we look at it from a different angle, around some of the behaviors we've seen in the market, you know, pixel tracking, some of the issues that Google's had: that data came from providing a utility. So it wasn't that the data itself was constructed with that in mind, or that the designers were thinking about how this could be used downstream.
You know, they were just thinking about what the utility is. Now, obviously, we would like everyone in the world to think holistically about every decision they make every time they make them. But that's not reality, right? People, I think, genuinely are good and genuinely just trying to create an outcome.
What are your thoughts, when we think about how a lot of that data has, in different ways, been fed into these large language models and Gen AI? You know, I was a very early poster on Reddit. I had no idea that my Reddit posts were eventually going to go into these models. What do we do to backtrack that?
Because it seems really hard to unwind and apply some of these ethical frameworks to it when the genie is already out of the bottle.
Paul: Yeah, and unfortunately, that's a very hard problem, and it creates significant operational risk for using AI. There's the whole idea,
I think this is apocryphally attributed to Tim Cook, that if you're not paying for a service, you're not the customer, you're the product. And that, of course, is nonsense. It's absolute nonsense. And the reason for that is, if you are using something like Reddit, right, you don't pay a Reddit bill.
If you're using Google, you don't have a Google statement on the end of your credit card every month, right? You are providing value to that, right? So, you're providing value to Reddit by making comments. That increases the, you know, the value of the community and so forth. You know, if you're using other services that are ad supported, you're consuming those ads.
They are, you know, tracking some information around you. So, that behavioral information is part of a value exchange. And, I mean, there's no way to fix this afterwards. You can't go and remove information from an existing model once it's been consumed and trained on.
Right? So, your Reddit contributions are now probably on my laptop in some statistical way. But I think what has to happen then is you really have to consider how those products are used going forward. And I think we're going to see a combination of things, probably some regulatory impacts in terms of creating new intellectual property with a large language model: we may see that it can't be copyrighted, or that the copyright may not be defensible, based on how much was used.
So, you can't fix it, but the products are probably going to be where we'll see that focus, and realistically the only place we can focus on it.
Anthony: Do you think with us being ex post facto on this occurring, do I still have a right to have that changed?
Because if it's my data, like we sort of established earlier, and I think it's a fair point that, you know, anything that describes me, as metadata about me, is mine, then why do I not have a right to go and get that? And I know that's an impractical problem, but where do you see that occurring?
Paul: Well, yes, you absolutely do have that right. Now, whether that right will be enshrined in law is an open question. And whether there are any technical means by which you can exercise that right is also an open question. I think the answer to that is no, there's probably no way, but I don't have any recommendations on this, right?
If you're deploying these models, there's nothing that I can say that, you know, if you do these three things, you will solve this problem going forward. You know, I certainly wish I did. But I think that that is part of the trust question here when we're thinking about kind of the tension between rights, between natural rights, ownership of data, and then what ownership or rights you may have for any information that comes out of that LLM based on what you produced.
There's no good answer to it, unfortunately.
Kris: Yeah, I think you've sort of touched on where I was going to go with the next question, which is, there's no way to put that genie back in the bottle, you know. I'm so very glad that I grew up in an era where there was no Facebook or digital cameras, and I wasn't walking around with a high-quality camera in my pocket, with all of the silly things that I did as a child, teenager.
Young adult, probably going into my thirties, actually, now that I think about it. There was an article, a paper, that I read in the lead-up to this by Daniel Solove, in which he was arguing that the privacy laws, as they're currently written, are not suitable for the AI age.
And I think you've sort of touched on that a little bit just now, Paul, where it's like, will these rights, you know, that we believe to be true, will they be enshrined in law? Yeah, to give one example, he was talking about that idea of individual control, which Anthony was sort of just touching on, that the complexity and scale of AI means that we don't have that ability to control that data.
It has gone. I'm now asking you to look into a crystal ball and provide advice to teenage you. You've been handed that phone from mum and dad; be the customer, be the consumer, be the constituent, be the voter, be all those things as you work your way through life.
Do you have advice there? Because I think we need to take some individual ownership here, is where I'm going: what do you recommend around, well, this is my data, how should I treat it? Because as Anthony said, and I think you agree too, by posting on Reddit I'm providing opinion, I'm providing value, and they are getting something in return. What are the issues there that we really should be educating on at an individual level, maybe even all the way down to a schooling level? Maybe not the toddler watching Bluey, but they are on YouTube, and yeah, their trends are being watched.
Paul: Oh, absolutely. The Internet is written in ink, as a friend of mine likes to say, and I think he stole that from someone else. Yeah, it's tricky. So, when we think about privacy, we think about personal data, and kind of the old idea behind that was: this is my data, these are my documents, they're mine, and you shall not have them.
That's not practical anymore.
Kris: Sorry to interrupt, but even in my own world here, in the information governance space, I've been advocating against that for an awful long time. You work for a business, you're writing these documents, you're writing these emails. Those emails are the corporation's corporate knowledge.
And the organization should help you to protect and manage them properly.
Paul: Oh, yeah.
Kris: There is, there's a bit of a dichotomy there in that sense of, well, hey, I'm telling you one thing in this instance, but I'm telling you something different in another. So, there is that tension there as well, right?
Paul: Oh, yeah.
Yeah. And you're absolutely right. So, the things that happen in a corporate space, right? And it kind of comes down to value exchanges again, right? So, if I'm working for a company, the company has certain expectations that I am going to create value for them and they're going to give me value in terms of my pay check and my insurance and compensation package and all that kind of stuff.
So, there is that piece. But thinking about like individual participation, in just modernity, right? So, privacy used to be about protecting access to your stuff. Personal information is now not something that you hoard away. It's effectively currency, right? So again, using Reddit, using Google, these kinds of things, right?
You're paying in this kind of currency. And so, the advice that I would give my younger self, well, actually, the advice I would give my younger self is pretty rigorous and probably doesn't belong here, but you need to treat personal information like what it is, and what it is, is money. You don't give away money. If people say, hey, you know, let me just go through your wallet and pick whatever I want, and I'll let you look at this magazine or whatever it is.
Right? So, you have to understand, and part of the problem is, you know, in using Google products, I don't know what I am worth to Google. I don't know, in terms of my personal profile, I don't know what demographic I fit into, I don't know how much they charge someone to show me a particular ad on a particular site but I can guarantee you that Google can tell you down to the penny what someone in my particular advertising profile is worth, because that's how they do financial forecasting.
So, there is kind of a power imbalance between the user of these services and the provider of these services as was ever thus. But when you are interacting with these things, you have to understand that you are in a value exchange. You're not getting something for free. And so, you have to be able to make a personal decision about, you know, what am I going to disclose to this company?
Because once they've got it, they've got it, right? So, we can talk about purpose limitation. We can talk about data subject rights and all those kinds of things. And those are great as far as they go. But at the end of the day, the company is going to do what it's going to do. We can't really control that.
And so, you just have to be able to understand that tension, as you put it between what you're getting and what you're giving.
Kris: I love that example of currency. It makes it very, very simple then to explain in the sense of, you are paying, it's not free, you're paying with a currency that is your personal information.
And to be honest, AI then exposes and exacerbates that tension with the privacy laws that are out there to protect us. And as you say, data subject access rights and other things are a wonderful thing, as is limiting of purposes like GDPR does, absolutely, calling out what this data will be used for.
But there are no laws that say what this data is worth, and I think that's a really interesting piece there.
Paul: I would love to see... I saw some students a few years ago who were working on the idea of a privacy label, and I know that some regulations have picked this up, but the idea is that you could say: this particular product or service, or whatever it is, collects this information, and they do these things with it.
You know, in theory, you should be able to see where they transfer it to and all those sorts of things. I have no practical idea of how this could be implemented, but it would be interesting to have a way of setting a fixed monetary value or an exchange rate on a piece of personal information, because I think that would make the relationship between customers and these companies far more equitable than it is today.
Kris: We couldn't imagine, you know, only a few years ago, how easy it would be to use all this data to have AI do these things, either. So, the want and the need and the desire to build those is probably where maybe regulation needs to help, but I'm not for regulating for regulation's sake. Who's got the desire? I certainly know it's not the data collectors that have the desire.
Anthony: I think it's a really interesting discussion, though, right? Because when we talk about true financial currency, there are multiple layers of trust built into the system. When we talk about data, where it's now almost a trope to say data is just a new form of currency,
yet there isn't implicit trust that is clear to the user, the receiver, the person that's got the value here in that value chain. So, it's really difficult, I think, for the average company, for the average individual, to work out what mechanisms for trust need to exist across these datasets, and then the utility of that.
I just, I really love that labeling idea, but obviously, without someone putting in a mandate that it's required, it's pretty hard to operate that way in the wild west. What are your thoughts, though, as we're seeing this evolution occur, where, much like money and other forms of currency, there is this clarity of trust?
How will we get to that clarity point? Because it seems to me to be a really long way away.
Paul: Yeah. It's difficult to say, but I think it comes down to again, really from a company perspective, understanding who your customers really are. I think a lot of companies have become accustomed to the idea that the market is the real customer, right?
Or private equity is the customer. But really, at the end of the day, you need to think about your company as a value creation machine for the people that actually put money into it. And so, you think about it: the customer has a demand, right? I have certain expectations, I need a thing from you. The demand goes into this big machine, and a step in the process picks it up.
There are people associated with it. It takes time, it takes resources. Something comes out of that into another process, and so forth, on and on, and then finally value pops out, and the customer accepts it, and they clap like a sea lion, and everyone's happy. Okay. So, understanding that value chain, and understanding what your customers are actually after, I think, is really where the trust is going to happen.
So, I think, you know, you brought up Daniel Solove, actually, Kris, I think you're the one that brought up the Solove paper. One of the things that he argues in another paper is the concept that the perception of harm is harm. If I think that you're watching me, it's going to affect the way I behave.
There are certain things I won't do. There are certain things that I'm going to do because I think that's what's expected. I'm going to feel uncomfortable. I'm going to feel distrustful of you, right? This is what's going to break the trust between the customer and the company at this point. If I'm feeling those things in the context of your products and services, chances are really good I'm not going to be your customer any longer than I absolutely have to. You can rely on customer lock-in for a while, right up until the point at which a disruptor enters your market, and then you have a problem. But again, you really need to think about what you are doing for the customer, and what their expectations are.
Because regulations are fine as far as they go, but the real pressure is going to come from the customers.
Anthony: That's super interesting. So, you think it's the market pressure that's going to drive the behaviors there. How does that then work, though, when you now have a situation where the trust holders, if you look at the case of Facebook and Meta, are effectively able to broadcast and change trust relationships, ones that didn't previously exist, through what is misinformation in those processes?
So, it's really, I think, an interesting question of how you understand provenance, which is really where I'm getting to. How do you know what the data is, and that it is what it purports to be? And coming back to that notion of currency, being able to hold it up to the light and look through it. Do you have thoughts about how we should approach that from the perspective that, again, we own our own data, but it's being used by these people out there and they're just custodians of it?
But yet we have no control over that custodianship. It's a really interesting paradigm.
Paul: It is. And I think as a data subject, if we can be reductive, as an individual person who's interacting with this stuff, yeah, again, it comes back to value exchange. So you think about Meta, you think about Facebook: they're doing what they're doing, they're collecting information, they're using that for targeted advertising and those sorts of things. But you're starting to see, and I haven't used Facebook in years, but from what I understand, the experience is becoming somewhat degraded, right? You're much more likely to see things like AI images and things that are designed to cause negative engagement, and all sorts of things. I also understand that their user participation is dropping. The demographic seems to be getting narrower. I know that, well, I don't know, but I have read that younger generations are much less likely to use Facebook.
So, I can't prove this. I can't point you to a paper or study that has plenty of charts and nice data and so forth. But I think what's happened is you have an organization, you have a company that produced a product. Originally, you could go to Facebook, you could set up, you could see what your family was doing and all of these other kinds of things, right?
It was a great way to connect with, you know, your old high school friends and all those sorts of things, but it has gradually degraded to the point that it benefits them more than it benefits you, right? So, the value is going in the wrong direction. And the result of that is people are not using the service as much.
Again, I think that's market pressure. You can set up regulations, and I think regulations are certainly fine, but at the end of the day, it's going to be market pressure that really changes the behavior of these organizations, and I think we're seeing that play out.
Anthony: So then let's loop that back to AI, and generative AI. What's your guess? And I get there's no papers out there, but there is some research starting to form around whether generative AI falls into the same trap, that it almost loops back on itself, and therefore the market's not actually going to continue to find a utility out of it.
I mean, we're really seeing OpenAI struggle to put out GPT-5. I don't know what's actually going on there, but there's clearly some friction and some delays. Is that going to be a part of the framing of the conversation? And is it going to be a little bit like the Facebook experience?
Paul: So, I think there are a couple of different facets there, or a couple of different aspects of that.
So, one of the problems with AI, of course, is that it is shatteringly expensive. OpenAI, you know, they don't share their financials with me, but I have heard that their burn rate is substantial.
Anthony: Well, they seem to be raising very large amounts of cash.
Paul: Well, they do. Well, they do. And that's an important aspect of it, right?
So, until the bubble finally pops, and let's call it what it is at this point...
Anthony: Did someone say tulips?
Paul: It might very well have done. Yeah. Might very well. I unfortunately am old enough to have seen a couple of tech bubbles pop, so I think I know one when I see one. But the fact is you have an enormous amount of capital that's going into this, and some value is coming out of it, there's no question of that.
You know, I can remember I was on a flight years ago, and there was a magazine with an article in it, I can't remember what magazine it was, it doesn't matter anyway, but someone was bloviating about how blockchain was going to fix IT. Now, the problem with that, of course, is this person didn't really have a way of articulating what was broken about IT at the time, and clearly didn't understand what blockchain was about. And we're seeing some things like that with AI as well. You know, there was a breathless article not too long ago about how someone had used AI to discern Scottish from American whiskey. And all it was doing was reading a spectrograph, right?
Well, you don't need AI for that. And probably they were not using AI, right? But at this point in time, if you're a tech startup, you want to sell some new technology, you want to launch a new company and you need some capital for that, it's going to be about AI. If I'm going to start selling pens, they've got to be AI pens in some way.
Eventually, that's going to break. And we're going to see a lot of retraction in the market. We're going to see a number of these companies either die or be acquired for parts, and so forth. And so, in terms of what that means for the relationship between this technology and other market-driven effects, like the potential reduction in usage of Facebook and that kind of stuff:
what's probably going to happen is you will see either a sharp reduction in investment in, or development of, these larger foundational models. You may see more focus on the development of smaller models that can do very focused kinds of things and run with smaller overhead, that manner of thing.
And a lot of the investment currently going into things like trying to replace customer support staff with chatbots, I think we're going to see a lot of that get pulled back. Is AI going to go away because it's expensive and it makes everybody upset? No, absolutely not. But, you know, we are where we are in the hype cycle.
I think that the trough of disillusionment is coming pretty soon. That doesn't fix anything, let me be really clear. So, all of your Reddit stuff is still baked into the foundational model; there's no fixing that. But the eternal optimist in me would like to think that further harm on that front may be attenuated somewhat, just because of the costs involved.
Anthony: Fantastic. I could go on and ask questions forever, and it has been a great conversation, Paul. There are so many more things to unpack. I'd love it, if you have time, for you to come on and do another podcast later in the year. But thank you very much for making the time. Thank you for really diving in and bearing with some of our questions.
It's been absolutely fascinating. I think the reason we'll probably want to unpack it again later in the year is that so much is going to change in six months.
Paul: Absolutely. Absolutely. Well, it has been a pleasure. I appreciate the opportunity.
Anthony: No, thank you. And thanks all for listening. I'm Anthony Woodward.
Kris: And I'm Kris Brown. And we'll see you next time on FILED.