Episode 29: From Gut Feeling to Jury Science: How Big Data Is Changing Trial Strategy

The following is a transcript of Episode 29 of Championing Justice. You can listen to the full episode here, or watch it on YouTube.


Darl:

Welcome to the Championing Justice Podcast. My name is Darl Champion. I am a trial lawyer just outside of Atlanta, Georgia. My law firm, The Champion Firm, handles cases throughout the state and some other states in the Southeast.

Today we have John Campbell on. I'm really excited to talk to him about the work he does. John is a school teacher, turned trial lawyer, turned law professor, turned jury researcher, who uses scientific methods and big data to study juries. And I myself have used John on a really big case and it helped me leverage a fantastic settlement. So happy to have John on with us.

Welcome, John.

John:

Hey, thanks. First of all, Darl, congratulations on the podcast. And second of all, congratulations on having the coolest last name in law.

Darl:

Thank you. Thank you. Although I did see there's a law firm called Moore Payne.

John:

I was just on a call with Spencer Payne not more than 30 minutes ago. And every time I talk with him, I'm like, "Dude, seriously, Moore Payne is just so good."

Darl:

Yeah. Great name for a law firm. But you're in Madrid. So you got to tell me about that before we talk about the work you do.

John:

Yeah, I'll give you the really short version. So Alicia, my wife, is also a trial lawyer and trial scientist; she's actually at a trial in North Carolina right now. We studied in Madrid 20-plus years ago when we were in law school. We needed summer credits and we came to Madrid, and we fell in love with the city, and we always said, "Someday, someday, someday we'll get back."

And we got back a couple times for a few days crossing through or coming through Europe. Then fast forward a lot. I'm a law professor and they tell me, "You know, every seven years you have a sabbatical year," to which I was like, "What are you talking about?"

And that's when I learned that being a law professor was kind of like a loophole in life. And they would give you a year and pay you part of your salary to not work and rest from the law professor workload.

So I said, "Yes, I will do that." I still had an active caseload too. And so we figured out some coverage for those. Alicia had lots of cases, kind of planned on flying back as she needed to. And we said, "Let's go for a year to Madrid. Let's take the kids who were seven and eight at the time, and let's throw them in a public Spanish school. Let's see how much Spanish they can learn. We'll have Europe as our backyard."

COVID hit. They weren't going to get to finish their year, but they were happy up until then. So we had to decide, are we going to rush home or are we going to stay? And we stayed. So we did the pandemic in Madrid. There was nothing to rush back to. Everything was sort of closed in the US and we said, "Well, let's stay a little longer." And that second year, we thought this is working.

We love it. Let's stay. And so we bought a place. Seven years later, we still live in Madrid.

Darl:

That's fantastic. Wow. Well, congratulations on that. And the whole law professor thing is awesome.

My dad is a college professor, retired now, semi-retired. I don't think professors ever retire. They always got to teach one class. But I sort of grew up in that environment as a kid where it's like, wait, you get all summer off. You get a month for Christmas, spring break, fall break. And I actually thought that that may be something that I was interested in.

But that's fantastic, John. So tell us about your work and what you do, and then we'll kind of dive into big data and what it is and how people can use it.

John:

Sure. So it's a great segue. We didn't plan it, but the reason I got into studying juries with big data was because I was a law professor.

So I'd been a trial lawyer. I had tried cases, injury cases, and I'd done a lot of class actions. Class actions don't get tried; they get multi-day hearings. Because I was working in consumer law and class actions, a lot of things went up on appeal. I ended up arguing lots of Missouri Supreme Court appeals and some federal appeals. And that was kind of my life.

And then I became a law professor and I wanted to publish, but I wanted to publish about things I knew. And I really didn't want to publish about con law or something. I figured I wasn't smart enough for that. But I knew a lot about juries.

And as I started looking at jury studies, there was this emerging idea that instead of using your grad students or your law students or people off Craigslist, you could use these really big online samples because not too many years before it had started to become possible for lots of reasons to find people who wanted to do sort of odd job work or they were bored and do things online.

Nobody was really using that much for social science. And then a few social scientists and economists and a few law professors started saying, "What if we could get people online?" Maybe that'd be better in some ways. We don't have to bring them in. They don't guess who we are and why we're studying them. We can manipulate things by just changing what they see on a screen, all this. We can pay them for the time we use, but not the commute, all that.

So I started in the academic world, working with some other great people like Valerie Hans at Cornell and Jessica Salerno at ASU, just publishing about jury behavior and learning from them. The trade-off was a little bit, "I learn the methods from you, and I think I can share some really interesting real-life questions that trial lawyers will care about." And that was the relationship.

I started to learn that, but I was still in the trial world. I was talking with Alicia, my wife, who was trying cases, and John Simon, who taught me to be a lawyer, who's an Inner Circle member and a wonderful lawyer. And we all started talking about, well, couldn't this be done on private cases? What if we used academic methods, but we studied individual cases? Couldn't we learn a lot?

We did it on our own cases, trying to build a better mousetrap for ourselves, and we had good success. And then the breakthrough was I got an invitation from some friends, including John Simon, to go speak to the Inner Circle about what we were doing. And all of a sudden the calls started coming in, and Alicia and I realized pretty quickly we were onto something, but we weren't looking to form a business.

And so we had to kind of systematize it. And fast forward, I think next month we'll hit 1,500 civil cases studied and 450,000 jurors. And just yesterday I recorded another verdict, and I think it's like $5.7 billion in verdicts now. And so it's blown up, it's kept us busy, and we keep learning.

Darl:

So I got a ton of questions because again, as you know, I'm fascinated by all this and I've worked with you in the past on a case.

How did you go from the traditional focus group model, the "Hey, let's bring in 18 to 24 people and talk about the case for a day," to big data, and why move away from that? And did you have experience with those small focus groups first?

John:

Yeah, I did. Yeah, I did both as even when I was a baby lawyer or before I was a lawyer, I saw my firm doing them and then I started doing them on my own case because I wanted to learn and Alicia did them on hers.

But as you know, I mean, anybody who's ever done it, the first thing is like even if you bring in 18 people, I think usually you're actually learning from five because there's always those five who talk a lot. And then there's that person who said like nine words in four hours.

So I mean, it's funny, because with 18 it's often not like we're getting fully fleshed-out opinions from all 18. The other thing was, as I started doing the academic stuff, I realized that if you tried to publish conclusions about 18 people's opinions in a peer-reviewed journal, you would be laughed out, right? They would say, "Hey, that's great, but how do we know the next 18 will be the same?" And your sample's so small that you only had three people with a college education.

How do you know these findings hold for those folks? How do you know that it holds across races and age and income and all kinds of things? And so the solution to that was always bigger samples so that you have less sort of noise in the data. And the way I always explain it is like, look, if you flip a coin 10 times, you could get heads twice.

But if you estimated the likelihood of heads at 20%, you'd be way, way off. But if you flip a coin a hundred times, you might get 52 or 48 or 53 or 51 or even 50, but you'll be very close to the actual number. And the only difference is that you got rid of the small numbers problem, right? You did it more times.
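John's coin-flip analogy is easy to check for yourself. A minimal simulation (a hypothetical illustration, not anything from the show; the function name and trial counts are made up for the example) measures how far the estimated heads rate typically lands from the true 50% at different sample sizes:

```python
import random

random.seed(7)  # fixed seed so the demo is reproducible

def average_estimation_error(n_flips, n_trials=10_000):
    """Average absolute error when estimating P(heads)=0.5 from n_flips flips."""
    total_error = 0.0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        total_error += abs(heads / n_flips - 0.5)
    return total_error / n_trials

# With 10 flips the estimate is routinely far off (roughly 12 points on average);
# with 100 flips it hovers much closer (roughly 4 points).
print(average_estimation_error(10))
print(average_estimation_error(100))
```

Bigger samples shrink the typical error, which is exactly the "small numbers problem" going away.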

So as we started recognizing that in the academy, it occurred to me that the same sorts of things were true for focus groups. And it's not to say you can't learn from focus groups, but the wisdom of focus groups was always, don't trust the numbers, use this to get themes, but you're not going to be able to predict value. It probably doesn't even tell you if you win or lose, but you'll at least learn some stuff. Bigger samples started to solve those problems.

And then they did some, and I'm sorry for this long answer, but they did some unexpected things too. They let us manipulate what people saw. And I don't mean that in a negative way. I mean that in like the way you talk about experiments, we could let some people see a piece of evidence and the whole case and some other people see the exact same case, but without the piece of evidence.

And if you had big samples, you could then see if that changed the outcome and ascribe causation. You could say, "The only thing I changed was this. I have a big sample and that there was a meaningful change in the numbers. This thing matters and here's how it matters." Which you could never do with small samples.
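The with/without-the-evidence design John describes is a standard two-condition experiment. As a sketch (the juror counts and win rates below are hypothetical numbers, not from any real study), a two-proportion z-test is one conventional way to ask whether the evidence really moved the outcome:

```python
import math

def two_proportion_z(wins_a, n_a, wins_b, n_b):
    """z-statistic for the difference between two win rates (pooled variance)."""
    p_a, p_b = wins_a / n_a, wins_b / n_b
    p_pool = (wins_a + wins_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 200 jurors see the disputed evidence, 200 see the same case without it.
z = two_proportion_z(wins_a=130, n_a=200, wins_b=104, n_b=200)
print(round(z, 2))  # |z| > 1.96 suggests the difference isn't just noise
```

With small groups of 18, the standard error is so large that almost no realistic difference would clear that bar, which is why causal claims need the bigger samples.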

There were efficiencies in time, because you could write this up, add video and images to show people online, and then you could do that one time and use it as many times as you wanted. There were efficiencies in blinding it, because you didn't call them to your office or somewhere, so they didn't know who was sponsoring it. And people were more open when there wasn't anyone listening to them, so they would enter more information and you'd learn more from them.

And so as we kept doing this, we just felt like this is giving us some stuff that we can't get from small groups. Not that small groups can't teach us things, but there were some things big data could do that small groups didn't.

Darl:

Before I ask you about kind of the process and how somebody goes about, whether it's contacting you or somebody else running a big data focus group and just what that looks like, do the small focus groups still have a role either by themselves or in kind of combination with the big data groups?

John:

Yeah. I think absolutely look, by themselves they have some. There's no doubt that if you bring people in, present your case, sometimes they will surface something you didn't think of because you were in your lawyer bubble. That's always good.

So I think there's value there as long as it's sort of taken with the right amount of caution that what they surfaced may or may not be a common theme or it could have been something very idiosyncratic to that group.

What we use them for more, and my favorite combination, and we actually ended up writing a whole book about this, JuryBall, is this: when you have all this big data, you still need to implement it, which involves things like, all right, now I know what questions will help me find good and bad jurors for my case, but how do I ask those in a way that gets meaningful responses, and how do I deal with the answers I get?

Now I know that I should focus more on defendant A's behavior and I thought it was defendant B, but now I need to practice doing that for my opening statement and how I talk to witnesses. And you really can only practice doing that by practicing and you need to hear from real people, are you picking up what I'm putting down?

So we often now will do big data studies and then multiple workdays where the lawyers work through that data to understand it, to think about how they should use it, and to practice implementing it. We do that primarily with Sean Claggett's team, but we've also done it with Jessica Brylo and others who are in a more traditional jury consultant role. That's when we see the best verdicts, right? They don't just have the data; they're working in small sessions, with real people giving them feedback about how they're doing, to implement what we know is most likely to work, and then making sure they're doing it.

So we've moved from small groups being what tell us how the case should work to more like small groups telling us if we're doing a good job, but big data being our guide.

Darl:

So it sounds like the focus groups are great for communication. Am I communicating the theme and the message properly, or do they have this glazed-over look and nobody's picking it up?

And I will say this, and I've worked with Jessica too, she's fantastic. I think I still do the small focus groups. I think they have a role. I'm always very concerned about, is there like one domineering personality in the focus group who's kind of getting everybody to kind of go along with them that might be tainting the feedback I'm getting?

And I've always felt like a bad focus group is worse than no focus group. I mean, because it gives you wrong information and it can kind of lead you astray from, "Hey, these are my instincts as a lawyer with experience, this is kind of the way I want to present it."

And then you do like one focus group with one bad person who kind of controls the conversation, leads everything a certain way, and then you change everything based on that and it's not accurate.

John:

I 100% agree. I had somebody really early on, when I was doing this, say to me, "I know that we need Hispanic women." They were very specific. And I said, "What do you mean? Why?" And they said, "Well, every Hispanic woman was for us in the focus group." And I was like, "Well, how many of them were in your focus group?" And they said three.

And I thought, "This is terrifying. This is a case worth millions of dollars and we're making decisions based on a few people." And yeah, I think you're right: you have somebody who's really loud and really negative and they keep hammering a theme. Human nature is we're going to hear that and focus on it, but it may be that that person is really one out of a hundred people.

And really most times what's going to happen is they'll surface a stupid idea, but in a genuine jury deliberation when people have been there for a week or two or three and they have some trust with one another, that person would be talked off the ledge.

But in a focus group where they don't know each other and that person dominates, the dynamics are different and we can leave scared of an idea that probably is fringe.

Darl:

So you're a lawyer. Are you coming into these cases as, "Hey, I'm going to be sort of co-counsel with you. We're going to divide the fee accordingly and this is kind of my area of expertise and I'm going to run the big data and all this other stuff?"

Or is it kind of an a la carte, like you just pay a flat fee for your services or do you do both?

John:

So I strongly prefer the contingent model, the we're-in-the-boat-and-row-together model. And that is often what we do, because, right, we're lawyers. Some lawyers' specialty is, I'm really good at depos, I'm really good at trial, I'm a great appellate lawyer.

Our specialty is we've seen a lot of cases, we can get really good information about what people are most likely to do and make the case better. So that's my favorite. I love that not only because of course there's more upside, but also because if I say to somebody, "I want to study it again," they never worry that like I'm selling them something because I'm spending my money, right? I want to study it again because I want it to be perfect.

I'll give you a very concrete example. My wife Alicia's at a case right now in North Carolina.

Darl:

Where in North Carolina? I'm curious.

John:

In Hendersonville. And I just got off the phone with her before she was heading into court. We have studied that case eight times with over a thousand jurors now. We're happy to do it because it's worth doing to figure out the best way to try the case.

And the great thing is, none of the attorneys involved had to say, "Are you sure you want to run another study?" because we're the ones paying for it. We wanted to run another study because we thought it was right.

And that model lets us do things like study the openings, because they were on CVN, and turn around what people thought of those openings in 24 hours, or study the case after the first week, using the dailies to put together presentations and see where we're at. So we love that.

But we have people who call us and say, "Look, I've been trying a lot of cases. I really just need the information. I'll go try it myself. I'm confident I can implement it." They're often right. And so they pay for one study. We do one big study, they take that information, we have some calls, and they go put it in play.

We do both. For many, many reasons, our work has moved towards contingency over the years, and I think it's the model that is probably most likely to continue and keep growing over the next years, but we do both.

Darl:

The costs. Tell us, because that's where I want to talk to about this other option you have called FRED, which is more appropriate for cases that may not be as big. Tell me about the cost if you just did kind of an a la carte flat fee big data focus group.

John:

Yeah. Big data studies have a kind of a range because we run studies as small as, I say small, but as small as 275 people. But as we add variables, as we test different damage numbers, as we check the case with and without that expert who might get excluded, whatever, we add jurors so we continue to have statistical certainty.

So we also send out reports sometimes with 700 jurors. I think the biggest we've ever done is 850 for an individual case in one study. In certain cases, we've certainly had more than that when we studied it multiple times. Those run between $30,000 and $40,000. That's kind of the going rate for those studies.

And so yeah, a case has to have sort of a critical mass to make a big data study make sense, even if you're paying a la carte. Flip side is, if somebody says, "I want to go contingent," and what they've got is a $500,000 case, that doesn't make sense for me either.

And so smaller cases until very recently, which is kind of what you're asking about and we can talk about is we didn't have a great solution for early in cases where it doesn't make sense to do big data or cases that were simply smaller due to coverage or injury or whatever.

Darl:

So tell us what a big data focus group looks like. And then we'll talk about FRED because I'm really interested in FRED. What's kind of the process? And maybe we can just walk through just kind of hypothetical.

So let's say I got a big case and I think it's an eight figure case and I say, "Man, I'm struggling with case value. I've got a good offer from the defense. I think case might be worth five times this, but I don't want to turn down this good offer. I want to talk to John and get some feedback on this."

So walk me through what that process looks like, what you're going to need from me and how we're going to run that focus group together.

John:

Yeah. You call. We talk about the case. I like to get oriented. So it's good to know what are you worrying about? What are you thinking about? What do you like? What's this case about? It's usually an hour call, not five. I mean, it's just, let's get oriented to this case a little bit. I send you some guidance on kind of what we need to get started, which I always say kind of looks like mediation statements, but for jurors.

Those mediation statements that you send me as first draft forms have a plaintiff mediation statement and a defense mediation statement, right? And so they lay out the plaintiff case. We can include in those any demonstratives, exhibits, documents, snippets, video clips, animations, body cam, dash cam, whatever. The defense is the same.

And so the hard work sometimes is making the defense robust, fair, complete. We pass those back and forth, you and I until you say this is clear and complete and concise, and I agree that we have been fair to both sides and jurors can't tell who put this together.

That's the gold standard: good, fair presentations that don't tip off who's doing the work. That's critical. When those are great, we build a study in survey software that does everything from getting agreement to confidentiality and non-disclosure, to letting people work through the case, to checks, both overt and ones people can't detect, that make sure people are paying attention, are real people, are being honest, all that.

And then they do things like jurors do. They vote on liability, on fault, on damages. Then we can route them, because it's a computer. We can say, if they voted for the plaintiff, show them some screens about the plaintiff's arguments and have them rate the top two that motivated them. Let's give them the list of critical evidence in the case and have them tell us if it helped the plaintiff or the defense or was neutral. Let's give them a chance to say in their own words why they did what they did.

And then we get this massive data set. We have an amazing team, run by a professor at ASU and two professors who were recently recruited and are now working at Cornell, who run the data, make sense of it, and produce the charts and graphs and things that help us visualize it.

We've worked together on that. We have a good system now to make sure it's reliable data, cleaned and presented in a way that can be read. Alicia and I or our team look at it, write it up, and send you a report. And so in really big steps, it's: talk about the case, get the presentations ready, build the study, report out.

Darl:

The jurors, where are you finding them? And I don't need to get into proprietary stuff, but Hendersonville, it's in Western North Carolina, somewhat rural venue, south of Asheville, I believe.

Are y'all looking for people that fit that demographic? Do y'all do a demographic profile of the venue and then try to fill the focus group with those folks?

John:

Yes. Yes. Although it's a little venue specific. So look, if you tell me you're in Dallas where I'm from, we can get a lot of folks from Dallas, first of all.

And so we will. And then we can get a lot of folks from around Dallas and in Texas and they have some commonalities. And then we can get people like them too because there are people like folks in Dallas.

Darl:

I mean, the traditional knock on the traditional focus group has been that these folks have time, and normally they have time because they're unemployed.

Is the big data focus group similar to that, or are y'all getting everybody, from professionals and white-collar workers down to blue-collar workers?

John:

I wouldn't have known this. I mean, I only learned this kind of through time, but yeah, we actually get a much broader spectrum than the folks who can volunteer to show up for five hours on a Saturday off Craigslist.

So for example, when we look and we do analysis pretty periodically about who are we getting? If we throw a nationwide sample, like let's just assume we throw it nationwide, we get something that looks very much like the country. So we get a political break that looks a lot like the census.

We get an age range that looks a lot like the census. We get an income range that looks like the census except that at about $300,000, you're going to see a drop. We're not really going to get people who earn more than that. That's a group that's hard to sample. And in many, many ways, what we get looks like the country.

Similarly, if we target any state, it looks a lot like the state. We know why. Luckily, social scientists dug into this because one of the first questions they had was, who are these people and are they like real people?

Well, the great thing is that when you're asking people to do an hour or two online, and we try to keep these studies to a couple hours, the number one reason people report for doing them is that they think it's fun.

Because they're not doing it to get rich. You're not going to get rich doing a two hour study online. But if your choice is between watching NCIS or doing Sudoku or doing this, some people do this. So we see, for example, in our samples, 80 year old people who are taking studies because they figured out how to get online, they got some free time, they're interested.

We see people who are grad students who think they'll make a little extra money, but it's interesting. And we see people who just think, I'm sitting at home, I'd rather go online and do ... When they're going online, they're not looking for a jury study necessarily, but they're looking at, I'll read the first chapter of a book and give feedback. I'll look at haircuts and say which ones I like. I'll look at a Coca-Cola ad and tell them what I think. I'll edit a short letter and give my suggestions or they might find a jury study.

And so we're actually getting a pretty great spread of people and with a little fine tuning and a little tweaking, you can get any kind of spread you need so that if you're in Hendersonville, North Carolina and you want a more rural sample, folks who hunt more, folks who own more guns, folks who might skew slightly conservative and income level that for the most part is going to be blue collar, you can get it.

Darl:

Gotcha. So let's say I got a case and I'm like, "Man, I can't fork out 30, 40 grand on the case as a case expense." And also, it doesn't really make sense to bring you in on it.

And you'd look at it too and say, "Hey, Darl, this is maybe a low seven figure case," which isn't ... I love seven figure cases, don't get me wrong, but they don't justify maybe all of that. What do I do? What are my options? How can I still use the benefits of big data without forking over that much money?

John:

Man, I'm glad you asked. I can tell you that until a couple years ago I would've kind of stared at you blankly, but luckily my wife is smarter than me. And so Alicia, more than two years ago, was saying, "You know what? We could build something where people could get at least some of the benefits of data, but we could get the cost down."

And it took a little while for us to have the time and energy and the right connections in the software world, and for everything to kind of come along so that we could do it. But what we have now is what's called Focus with FRED. The website is literally www.focuswithfred.com, right? That's it.

And we call it FRED. We refer to FRED as a person. FRED is kind of big data's little brother. It's boiled down: everything we've learned in the last 10 or 11 years about how to collect data, how to clean it, and what we need in samples to learn things, we've taken and kind of shrunk down.

And so if you were to log on now to that site, what you'd find is you can upload a shorter presentation of the plaintiff and defense case, just like we talked about in big data, but now they're shorter. You still can include a little bit of video up to five minutes per side. You can still build in images and then you're going to be routed to a screen that we've worked pretty hard on to make do it yourself.

And you're going to say, for example, let's assume that case you had is a motor vehicle collision case and it's only got a million dollars in coverage. You say, "I have one plaintiff against one defendant. It's a motor vehicle crash." You fill in a couple of basic things about who the plaintiff and the defendant are. It's going to fill out basic jury instructions.

It's going to make a simple overview that says that so-and-so is suing so-and-so for a motor vehicle crash. You're going to upload a plaintiff and defense case that are reasonably short, but are certainly long enough that you can lay out what happened and the injuries.

You're going to provide your one single economic request and your one single non-economic request. You're going to put those in. And then you're going to go through a few clicks and say, "I want them to tell me what they think of these 10 pieces of evidence, and I want to ask the jurors these four open-ended questions: What'd you think of my plaintiff? How bad do you think that crash looks? Do you believe the TBI is from this, or do you think it was preexisting?" Whatever.

FRED is going to go out and largely automate it now: build that study, find those jurors, real people, just like we get them for big data. We're going to pay them less because we need them for less time. It's going to go get 75 people, run them through the study, and then produce a mini report that tells you your range of win rate, because we can only be so certain.

So it's within a 10% range, your range of damages, if there's fault, the range of fault, how people are viewing your key pieces of evidence, how credible they think your case is, how they view severity of injury, and then the answers to those open-ended questions.
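That "within a 10% range" caveat tracks basic sampling math. As a sketch (assuming a simple 95% confidence interval on a win rate at the worst case of 50%, which may not be exactly how FRED computes its range), the margin of error for 75 jurors versus a 275-person big data study looks like this:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion from n jurors."""
    return z * math.sqrt(p * (1 - p) / n)

# 75 jurors (a FRED study) vs. 275 (the smallest big data study mentioned)
print(round(margin_of_error(75), 3))   # about 0.113, i.e. roughly +/- 11 points
print(round(margin_of_error(275), 3))  # about 0.059, i.e. roughly +/- 6 points
```

So the cheaper 75-person study buys you a coarser estimate, which is why the mini report states the win rate as a range rather than a single number.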

So you're going to get a mini report. That mini report, if you do it all yourself, is going to cost $7,500. If you ask an attorney on our team to review it once, give you a little feedback, and help you build it, it's going to cost $9,750. And then you're going to have this report in three to five business days. That's FRED in a nutshell.

Darl:

Gotcha. Yeah. And I pulled up the website and I was looking at it. The jurors are real people, but y'all use AI to help kind of...

John:

Yeah. So to make the system lean, we had to figure out how to automate almost everything. Not to get too nerdy, but that means we moved from a build team building a big, complex big data study, one that could have three plaintiffs and four defendants where we want to manipulate two in and out and all of that, to something constrained: we have one or two plaintiffs, one or two defendants, a claim or two against each. We can build all that in a grid, and it'll auto-build the study.

The real people take the real study, but AI, for example, is being used to check their attention, make sure their answers are real, make sure they're real people. Silly things like: what if somebody means to enter a million, but they add an extra zero because they're in a hurry? Well, that's a 10X error, right? Instead of a million, they enter 10 million.

So in all studies, we have to ask, how much did you mean to enter in words? And so they put in a million and then they type one million. If they put in 10 million in numbers and one million in words, we know there's something wrong.
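That numbers-versus-words cross-check can be sketched as a simple comparison (a hypothetical illustration only; the actual FRED pipeline uses AI to interpret free-form answers rather than a fixed lookup table like this one):

```python
# Map a few spelled-out damages answers to numbers and flag mismatches,
# e.g. a juror who typed 10,000,000 but wrote "one million".
WORD_VALUES = {
    "one million": 1_000_000,
    "ten million": 10_000_000,
    "five hundred thousand": 500_000,
}

def entry_is_consistent(numeric_entry: int, words_entry: str) -> bool:
    """True when the typed number matches the spelled-out amount."""
    expected = WORD_VALUES.get(words_entry.strip().lower())
    return expected is not None and expected == numeric_entry

print(entry_is_consistent(1_000_000, "one million"))   # True
print(entry_is_consistent(10_000_000, "one million"))  # False: likely a stray zero
```

Entries that fail the check can be dropped or flagged before they distort the damages numbers.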

Well, in FRED, we have AI that's checking all that so that a human isn't hand-checking it, which saves time. When we get feedback from jurors, some answers are shorter, some are longer. The AI is combing all that feedback from real people and distilling it into takeaway themes. And so between software and AI, we've built something that can run quickly without as much human touch, which lets us drive the cost down.

Darl:

Right, right. Well, do y'all still use real names in the case and stuff like that? Or do you ask people to anonymize it or does it get anonymized by AI when you upload it?

John:

So in both versions, small data and big data, let's call it, with FRED in the big data world, we ask people to anonymize when possible, and we like to talk through that, especially in big data.

So you can almost always anonymize the plaintiff, and I think it's always a good idea. The harder question is should you anonymize the defense? Because if you're suing Amazon and you call it, I don't know, the jungle, you're not going to get the same damages because people know who Amazon is and they know how wealthy it is.

So it's less possible sometimes to anonymize defendants because you could actually change results. It's usually possible to anonymize plaintiffs. If that's done, it's done by the attorneys. It's not done automatically by AI because we see that as a decision that takes some human decision making.

Darl:

So let's talk about AI. AI jurors, I've seen some focus group companies that are kind of marketing that you're going to have AI jurors give you feedback. And what is that?

John:

Junk. It's junk. Sorry. I mean, I know some people doing it and I think they do other things well, but AI was not designed—large language models were not designed—to simulate people and predict how they'll vote on unique individual cases. There's a lot of research about this.

So everybody hoped that this could replace people for, for example, political polling. But when researchers ran a study having AI people, fake people, say whether they approved of Trump or not at a given point, the AI missed badly on something that's in the public domain. There's a lot of AI training data on Trump approval polls and Trump articles, but it missed badly, and it especially missed badly in minority groups, which is troubling.

So then fast forward to sort of doing it. And by the way, sometimes when people hear me say this, like I had a friend say, he's like, "Yeah, but John, you don't want it to work because it would hurt your business model." I just want to be clear, that's not true.

My single largest cost in everything I do is paying jurors. If I could replace them all with AI for 20 bucks a month, man, oh man, I could lower prices and still make more money. And the day that's true, that'd be great.

But right now, what we've seen is there's a great paper out of Cornell that talks about the utility, or I think it's called the disutility, of AI jurors. They looked at all the large language models and tried predicting ... because the first test is, can they replicate a human sample? If you show real people a case, does AI get close to what real people said?

Not even what will a jury do, just does it replicate what real people do when they see the same thing? The AI missed badly on liability and damages. Sometimes it would say 20% win rate and the real win rate was 90, for example. I mean, badly, nobody would guess that bad.

And in the funniest example, they told the AI, you estimated the case value at, I think, $250 million; think harder, because you can do that with these deeper reasoning LLMs. And it came back and said, "I have revised my prediction. I think the case is worth two million." So either two or 250. So then we did a study of one of the very popular sites where we had 400 people that had looked at the case.

The attorney had run it through this AI site as well and we said, "Do you mind sharing the results?" Our data said the case had a 97% win rate, meaning 97 out of every 100 people vote for liability. This legal AI site with 500 AI jurors predicted the win rate at 62%. That's a 35-point difference. And you can imagine if you're a lawyer, if somebody says 97 out of 100 people vote for your case, you're not worried about liability anymore.

If somebody says 62 out of a hundred vote for your case, you got a coin toss case, right? That's a hell of a deliberation.

Darl:

You got a coin toss case and you're also worried about the jurors holding the verdict down that might vote for liability just to go home.

John:

Absolutely. And so then it also predicted value. It lowered the value, too: our data said $25 million, the AI said $16 million.

But imagine, Darl, you'll know this right away. If you told me I've got a 62% win rate on a $16 million case, I need to start thinking about settling that case for seven or $8 million, because it's a coin toss on winning. The verdict's going to get held down, and four out of 10 times at least, maybe five out of 10 with deliberation, I lose it.

Now, our data said 97% win rate, $25 million. Well, that is a case where you say, unless they pay me full value, I'm going to try it, because I have very little risk. The crazy thing is this case went to trial. They got a $25 million verdict and won the case.
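The strategic gap John describes follows from simple expected-value arithmetic. As a rough sketch, using the numbers from the conversation (the bare probability-times-verdict discount is a simplification for illustration, not anyone's actual valuation model):

```python
# Back-of-the-envelope illustration of why the two predictions above
# point to opposite strategies: multiply the win rate by the verdict
# to get a naive risk-adjusted case value.

def risk_adjusted_value(win_rate: float, verdict: float) -> float:
    """Naive expected value: probability of winning times the verdict."""
    return win_rate * verdict

# AI-juror prediction: 62% win rate on a $16M case.
ai_view = risk_adjusted_value(0.62, 16_000_000)

# Real-juror data: 97% win rate on a $25M case.
data_view = risk_adjusted_value(0.97, 25_000_000)

print(f"AI view:   ${ai_view:,.0f}")    # roughly $9.9M -> think about settling
print(f"Data view: ${data_view:,.0f}")  # roughly $24.3M -> try the case
```

Under the AI numbers the rational move is a seven-to-eight-figure settlement; under the real-juror numbers, trial carries little risk, which is exactly how the case played out.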

And I shudder to think what would've happened if they had relied upon fake people who are created by AI who attempt to predict what real people will do. So my simple view is this, when AI decides jury trials, we should use AI to predict it, but until then, let's use real people.

Darl:

Right. And hopefully we never will have a time when AI is deciding...

John:

No, I mean, I think that'd be a terrible idea. And frankly, I think it's weird, because you and I started talking about small samples. We used to use 12 to 18 people to learn about cases.

We still use it, but maybe we use big samples too. AI takes that away: when you use AI people, which I struggle to call people, when you use these fake AI entities, you've taken your actual human sample to zero. I don't know how that's a good idea.

Darl:

Right. Yeah, no, absolutely.

There's a funny guy on Instagram. I can't remember his name, but he makes these funny videos of himself talking to AI and it gets everything wrong. He's got one where he's like, "How many Rs are in strawberry?" And it's like two. And he's like, "Well, let's spell it." And they spell it and he's like, "You just said three Rs." And it's like, "Nope, just said two." He's got some really funny ones too.

He's got one where he's wearing a baby's hat and he's like, "Give me fashion advice. What do you think of my tiny hat?" It's like you look great. But it's kind of funny watching how AI gets so much stuff wrong, but then we're also told all these great things about AI. And look, I could talk about AI all day long. I mean, I think AI is a great assistant.

I think it's great to help systematize certain things. I can give you one way that I use it: I use it to draft mediation statements that I want to send to the mediator. I upload ones that I myself have drafted in the past. I think Claude's a great tool for that. I upload some of the facts of the case, the depositions, and create a great prompt. You can get something that's 80% done, and it really makes you efficient.

Where I have a massive problem with AI is outsourcing everything to it and outsourcing the decision making and the case to it. And that could happen with focus groups, but I see it ... Again, I'm not trying to knock any companies out there, but there's a lot of these AI demand drafting companies. They get so much stuff wrong and they say, "Oh, well, it frees you up to do more of your own thing and then you can work on more higher level stuff."

The problem is, human nature is to be lazy, to do the least amount of work. And so when you get something that purports to be a finished product, you just rely on it. And we see it with all these fake briefs that are getting filed with hallucinated cases. I remember the first time I saw it happen, I said, "Oh, well, maybe these lawyers didn't realize; maybe it wasn't publicly known yet that this could happen. So this will probably be just a one-time thing."

Instead, the extent of the problem has exploded. And the psychology of that is fascinating to me.

John:

And it's not happening just in a remote trial court. I mean, I just saw it, I think, at either the Tenth or the Seventh Circuit, right?

Darl:

It happened at the Georgia Supreme Court, too, recently.

John:

Yeah. I mean, we're seeing sanctions at state Supreme Courts and Federal appellate courts sitting right below the US Supreme Court. It's pretty stunning.

Yeah, I haven't used the AI settlement drafting stuff, but I can imagine you need to be careful anytime AI writes for you, because even when it doesn't hallucinate, it tends to embellish and fill in blanks. And then, yeah, the thing that's always stunned me is that when you jump to "AI can now simulate a person and their complex decision making," we're so far out over our skis that even the LLM creators never claimed it could do that.

Can it simulate language, predict the next word, fill in the blank, summarize, invent interesting things? Sure. So when people ask me, "Should I use it to brainstorm?" Yeah, if you want to say, "I have this type of case and I want to blame the hospital and I want to focus on the hospital, not the doctor, give me 10 potential phrases." AI is great at that because it'll spit them out and then you're the final decision maker on which ones make sense.

But if anybody's ever tried that, you know that two or three of them are really stupid. A few of the examples it gives are terrible, but yeah, one of them might be good and you're the final decision maker.

Once we let it write stuff for us without review, or pretend that 500 AI "people" are the same as 500 real people, I think we've asked more of it than the tech can currently do. I don't know what AI will do in 20 years, but I think it's pretty clear what it does right now.

Darl:

Yeah. There's a term for it, I've heard different terms, error propagation; tolerance stacking is another one. But the more you stack error upon error upon error, the farther afield you move. Imagine now that what's getting fed into the focus group program with the fake AI jurors is itself something created by AI that maybe can't pick up on nuance and make the judgment, right?
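The error-stacking point can be made concrete with a toy calculation. Assuming, purely for illustration, a chain of independent automated stages (AI-drafted inputs feeding AI "jurors" feeding an AI summary) that are each 85% reliable:

```python
# Toy illustration of error propagation: when stages are chained, the
# probability that the end result is right is the product of the
# per-stage accuracies, so reliability decays multiplicatively.

def chained_accuracy(stage_accuracies):
    """Probability every stage is right, assuming independent stages."""
    result = 1.0
    for acc in stage_accuracies:
        result *= acc
    return result

# Three hypothetical stages, each 85% reliable on its own.
print(round(chained_accuracy([0.85, 0.85, 0.85]), 3))  # 0.614
```

Three individually decent stages leave you right only about 61% of the time, which is the intuition behind "stacking error upon error."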

I mean, and that's kind of the thing. And I mean, I hate to say it. I mean, I think that so much of what we're seeing now in the personal injury world with the high volume, super high volume practices, we're just missing out on so much of the human story and the human connection and what happened in their life with the injury because you have to customize the approach to every case. I mean, I interviewed a guy about two years ago, had 500 cases, a year out of law school, 500 cases, John.

John:

Boy, that is terrifying.

Darl:

That's terrifying.

John:

I don't think you should have a 10th of that straight out of law school. A 10th of that would probably scare me.

Darl:

Yeah. I mean, and it's like, well, they're soft tissue car cases, case managers doing everything. And it's like, but still, you've also got to understand that sometimes those soft tissue car cases aren't cookie cutter, right?

If I've got a young guy who's 22 years old and he has an epidural steroid injection and the system flags, oh, he's better, treatment's done, let's settle his case, well, what if his symptoms come back and he needs epidural steroid injections every 18 to 24 months for the rest of his life, or he needs a back surgery? That's going to get missed.

But I am glad that you're using real jurors. So I applaud you for that. I think that's fantastic.

So John, before we go, one question that I love asking everybody is about any predictions for the future of legal, and that could be everything from something marketing or business related, how people are going to find lawyers in the future, the sort of consolidation we're seeing with massive firms, to how people work up cases, how people might integrate certain technologies into their practice and that sort of thing.

So you got any predictions that you've got in your mind that you think you have a cool, interesting take on something?

John:

Yeah. I mean, look, self-servingly, I think we'll continue to see, and I don't mean just our firm, the integration of science and scientific methods into understanding cases. We used to see it as the art of trying cases. I think we're starting to see it as the art and science.

The art will never go away, and it matters, but we can put some science to it. So we've seen a trend line that'll continue. I think the second thing is that the infusion of litigation finance, and then investment in firms directly, is only going to accelerate, with all the pros and cons that come with that.

I think it's only a matter of time before we see the first plaintiff firm go public. I think that will happen. And on the good side of that, the idea we used to express as, well, the plaintiff's firms are sort of outnumbered by the defense, but we are more agile, is going to change to, no, the plaintiff's firms are as big as the defense and have as many resources.

That's a good thing. But as you mentioned with the volume issue, the challenge is going to be whether we can continue to serve each client individually, and whether, in the effort to systematize, regularize, productize, we can still treat each person individually.

So I don't take the doomsday view that the boutique, small, talented law firms are going away. I don't think so at all. I think there will be room for excellent lawyers to work small numbers of cases at a high level forever. That doesn't mean there won't be consolidation among law firms, and I think there will be, but I'm a bit of an optimist.

I think in the end, what we'll get is really good systems for trying cases. We'll get some great firms who do some good things for the plaintiff's bar because they have the resources to fight back against the chamber, but we'll also continue to have these talented trial lawyers who have sets of cases and who are in very high demand.

Darl:

John, before we go, tell us where people can find you.

Obviously, focuswithfred.com is one place where people can reach out. I've actually got a case in mind that I want to use with that, and I'll let you know the results and my thoughts compared to it. And if it tries and we get a result, maybe we can come back and do another podcast to talk about how that worked out.

But aside from the Focus with FRED, if people want to bring you in on a big eight figure or nine figure case, how do they connect with you?

John:

Yeah. So because there are lots of Campbells, because I have a very generic and not nearly as cool name as Champion, I'll say it slowly and carefully. We are Campbell Law, campbelllawllc.com, so you get three L's in a row: campbelllawllc.com. And I'm john@campbelllawllc.com. You'll find me there.

You can also go to juryball.com, where you'll see a little bit about what we do and how we implement it. And yeah, if you reach out to me, I'll get back to you and we can talk about it.

I'm kind of like you, Darl. I'm a bit of a nerd. People always say, "I'm sorry I'm talking about my case." And I say, "Man, I love cases." I love hearing about cases. The weirder, more interesting the case, the better, the harder the riddle, the better. So I love that.

Darl:

Well, I'm going to use Focus with FRED. I've got one case of mine and I've got a case in mind for maybe a big data focus group. So I'll be in touch on that one too and enjoy Madrid.

Briefly, tell us what you got going on next week in Madrid with Jury Ball. Tell us what that is. Oh yeah. The podcast might appear after Jury Ball by the time it gets edited, but I'm sure there will be future events.

John:

Yeah. So the background is we wanted to put together what we were doing, kind of like what we're talking about today. So Alicia and I wrote JuryBall along with Sean Claggett, who was a really early adopter of data, somebody who used it and trusted it and got some great results, and a wonderful trial lawyer who then leveraged data. So we wrote the book.

Well, now we have conferences in Madrid every April and in Las Vegas every October where we bring together ... I love the conferences because we bring together attorneys who are using data in new and interesting ways, but we also bring in the best academic jury researchers who often write really interesting things that nobody sees.

So for example, we have Valerie Hans from Cornell, Jessica Salerno from Cornell, Nick Schweitzer from Cornell, and Sherry Diamond from Northwestern. Those four people are probably four of the top 10 jury researchers in the world. They're all going to be at JuryBall Madrid, except Sherry Diamond, and this is sending her some love: she fell and got hurt, so she's not going to be able to make it. Sherry, I hope you recover if you hear this.

But the group that shows up then is this mix of really great lawyers, many of whom ... Last year our anchor speaker was Nick Rowley. Wes Ball, who has a nearly billion-dollar verdict, is coming this year. So these are amazing lawyers.

Charla Aldous from Dallas, great lawyers speaking about how they try cases and use data in all of their practice. Alicia, Sean and I talking about stuff we're learning as we study more people and academics talking about the latest and greatest. And we hold that in Madrid because Alicia and I live here and love it.

So we're also taking 125 people to the Real Madrid game, and that's going to be crazy. And we're taking 80 lawyers to Segovia on Saturday. So we infuse some Spanish culture, and people leave honorary Madrileños.

Darl:

Thanks, John, for joining and thank you for listening to this episode of the Championing Justice Podcast. Please follow us on Spotify, Apple Podcasts, and YouTube so you can see and hear our latest episodes when they go live.
