Hi, and welcome to The Impact on Fintech TV. I'm your host, Jeff Gitterman. I'm joined this morning by Lillian Freiberg, the head of North America for Clarity AI, and, I have to say, a good friend. I'm excited to be interviewing her this morning. Welcome to the show.

Thank you for having me, Jeff. It's a pleasure.

So we have a tradition on The Impact. We always ask people up front: why are you on the side of good? Why are you doing work in impact and not traditional finance or investment banking or something that might be more profitable? Give us some background story.

I can start with the easy part. I'm from Amherst, Massachusetts, so caring about the environment and people and the community and the world that we live in is sort of the oxygen that we breathe. But what drew me into what you would maybe call impact is this idea that traditional finance is evolving. I think finance and capital allocation have the ability to make and shape the world, and I want to be a part of that instead of just screaming at things from the sidelines. I want to make change from inside the systems that can do it.

I agree with you. I've always believed you can't change a system from the outside by throwing rocks at it. You change a system by becoming a leader within that system, and you certainly have become one. Lillian, tell us a little bit about Clarity AI and what their focus is.

Great. So Clarity AI, I think, is often misconstrued as a data provider. We're a very AI-native, tech-first intelligence layer built to augment any sort of investment workflow. We work with clients of all sizes and strategies, and the products are built to support pre-investment and post-investment workflows and to meet all of those needs in one place, so that people aren't scrambling to juggle 12 subscriptions just to make better decisions off of the data.
So if I'm at a company and I have a problem, what problem is Clarity AI solving for me? Or is it multilayered?

Multilayered, for sure. There are certain clients that are looking for more risk mitigation. If you think of institutional banks, they're looking for enterprise-level risk oversight, and that's something we can help with. Or if you're an activist hedge fund and you want to know about the holdings in your portfolio, you can look at the controversy solutions we have that really help you understand what's going on and how material it is to your portfolio. So it depends on the client type, but I would say it goes across asset classes.

So talk to me about resilience. The big words in impact, especially on Wall Street today, are adaptation and resilience, AI and the impact of AI on companies. Talk to me about how Clarity AI intersects with that and helps companies look at it.

Clarity AI has approached this very intentionally. AI is sort of tacked on to everything right now, and just having AI doesn't mean it's good AI. We've been using it since 2017, since inception, so it's not a tack-on; it's not something we're adding for marketing.

It's not something that got its name last week?

No, no, no. We've had years and years to really hone the process and be conscious of how we layer in expert human nuance on how to deal with data. One way we help is by taking off the burden of assessing AI providers, because we'll do all of that on the back end. We're constantly evaluating, and then using it in a way that's really conscientious, with our team of experts.

It's interesting, because right now, every time you wake up there's a new AI, and each one is leveling up on a regular basis.
It's very hard for a business owner to look out on that landscape and say, where do I jump into this fast-moving river? Do I wait and wait until the product's perfected? Do I jump in early and find myself out of date in six months? How do you help companies work through that process and those risks?

It's a good question, because AI can be really scary when you don't understand it and you don't understand what you should be evaluating. Right now, the question isn't just how powerful the AI is. Obviously the model matters, and we have rigorous ways of checking that with evaluation harnesses: essentially exams you give an AI to test its capabilities. But there's a trust component that I think is really important; it's maybe the most important thing for AI right now. If you don't have an AI system that's auditable and transparent, one that people can bring to their IC or stand behind with regulators, there's no chance you can be a trusted provider. Making sure people understand how you're using AI, that it's replicable, and that you're guiding them through the process is a really, really important part.

So do companies think it's important to be AI-forward right now, or are they trying to integrate AI in the background? And what are customers looking for from companies in this direction?

I think right now AI is something people feel they have to have, because if they don't, they'll be left behind. And I think that's true. But what matters more is the way it's integrated into workflows, for example. You see a lot of larger asset managers and institutional banks trying to build something internally, but they don't have the resources, or they don't really understand what parameters they should even put around AI. So there's this public posture of "we must have it if we want to be competitive."
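The "evaluation harness" idea mentioned above can be made concrete with a minimal sketch: a fixed exam of question-and-expected-answer pairs run against a model, scored for accuracy. This is an illustration only; the `evaluate` function and the toy model are hypothetical stand-ins, not Clarity AI's actual system.

```python
def evaluate(model, exam):
    """Score a model against (question, expected_answer) pairs.

    Returns the fraction of answers that match, ignoring case
    and surrounding whitespace.
    """
    results = []
    for question, expected in exam:
        answer = model(question)
        results.append(answer.strip().lower() == expected.strip().lower())
    return sum(results) / len(results)

# Toy "model" that only knows one fact, to show the harness in action.
toy_model = lambda q: "Paris" if "France" in q else "unknown"

exam = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Spain?", "Madrid"),
]

print(evaluate(toy_model, exam))  # 0.5
```

A real harness would swap the exact-match check for task-appropriate scoring (numeric tolerance, rubric grading, and so on), but the shape stays the same: a fixed exam, a model under test, and a score you can compare across models over time.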
And then there's the internal question: how do we actually pull this off? That's where we're seeing a lot of tension around AI in general, because it's a huge risk for people to get it wrong.

Yeah, especially if you're doing it internally and building something that gets replaced by Anthropic a week later, and you have to scrap everything you've done.

Or if it makes a mistake.

Right. And then your head is on the chopping block.

Yeah, especially in the world we operate in, where the SEC and FINRA are constantly looking and clients are constantly suing if you get something wrong. So it is a high-risk area.

Talk to me about being a native intelligence layer rather than a traditional data provider, and explain the difference.

Clarity has been using AI since inception, so since 2017. That gives us not only more time and familiarity with the different types of models, with how to layer in the different expert opinions and decide what good looks like, but it also means we have a really flexible architecture on the back end, because we understand that's what you need if you want to bring the best to the table. If you're a legacy data provider, you're maybe bringing scores to the table, a data set, a list of metrics. If you're using AI, you're able to capture and update so much more. If you think about the sheer mass of information that has to be ingested daily to keep up with news, controversies, and new metrics, it's mind-boggling, and it's not something a human team can do at this point. Using AI, and being nimble about which AI to use, whichever is the leader at the moment, allows you to take in a ton of unstructured data and convert it into something usable, so that end clients can actually make decisions based off of the insights.
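The pipeline described above, ingesting unstructured text and converting it into something usable, can be sketched in miniature. Here simple keyword tagging stands in for the AI models a real pipeline would use, and every name (`structure_headline`, `CONTROVERSY_KEYWORDS`, "ExampleCo") is illustrative, not part of any actual product.

```python
# Map of keywords to controversy topics; a real system would use
# learned models rather than a hand-written lookup table.
CONTROVERSY_KEYWORDS = {
    "lawsuit": "legal",
    "spill": "environmental",
    "recall": "product_safety",
}

def structure_headline(company, headline):
    """Turn a raw news headline into a structured controversy record."""
    tags = [topic for kw, topic in CONTROVERSY_KEYWORDS.items()
            if kw in headline.lower()]
    return {"company": company, "headline": headline, "tags": tags}

record = structure_headline(
    "ExampleCo", "ExampleCo faces lawsuit over chemical spill"
)
print(record["tags"])  # ['legal', 'environmental']
```

The point is the shape of the transformation: free text in, a record with a company, the source text, and machine-readable tags out, which downstream analytics can then aggregate across a portfolio.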
So talk to me about the ESG controversy over the last year or two. Has that shifted anything about the work you're doing at Clarity AI, or how you're pushing out the product sets?

Yeah, I think it's definitely affected the way we present ourselves in certain markets. Obviously, as the head of North America, I've seen significant shifts; there are different political backdrops to incorporate and be considerate of. But when you boil it down, whether you want to call it ESG, impact, or risk assessment and mitigation, it's still the same thing. So I'm fine to discuss what still matters, and it's really risk at the end of the day, or alpha generation, essentially. The ways we're talking about it may be a little bit different, but the conversation in its essence is still the same.

You brought up investment performance. How does Clarity AI support investment performance and risk management? How are you covering both layers of that?
On the one hand, if you look at a traditional team, they have a million subscriptions and all of this disparate data. They're spending so much time reconciling, collecting, and standardizing all of it, and it usually lives in a thousand different Excel spreadsheets. So on one hand, Clarity is able to really lighten that burden and help get rid of operational drag.

Right.

And on the other hand, what we're really excellent at is making sure you have all of the relevant risk insights you might need. If you're investing in areas where you have warehouses that need to be taken into consideration: are they in flood zones? Are they in fire zones?
As we're speaking today, are they in war zones?

War zones, yeah, sorry. And then having everything in one spot, really unified, with the analytics we've built on top of it, helps you not just understand what you're exposed to in terms of risk, but what to do next. That's the difference between falling behind and being able to bring yourself to the next level, I think.

When you think about the work you're doing, do you see yourselves as a support or plug-in for a human being who is ultimately making a decision, or as something that ultimately replaces the human being?

I don't think we're trying to replace people. Our approach is to empower clients, whether they're at a company or they're investors, with the tools they need to make a better decision faster. We're not trying to reduce headcount; it's not "get Clarity AI and you can get rid of your team." It's more like you get Clarity AI and your headcount has multiplied, but at a fraction of the cost.

Great. So we're delving into this whole AI world, and you're an AI expert in a way. Do you see the disruption of jobs that's coming as something that will slow, or do you see it accelerating? Are you in any way nervous about how we're handling this disruption as a country or as a world?

There's a lot to take into consideration right now, a lot of relevant topics we could speak about. Tensions are reaching a peak around labor and AI, and whether people are going to face layoffs and what that means. Of course that makes me conscious of it; it makes me feel we have a responsibility to people, because I work in AI. But at the same time, it reminds me that technology shifts, and that it ultimately ends up creating more opportunities. So the landscape might change.
I think we will see a change, but we'll also have newer positions opening up, so there will be a sort of netting out, if you will.

If you had to tell your kids today what job or what education they should be pushing towards in a world that's changing so rapidly, where would you steer them?

This hasn't changed, even from before AI was really a big thing: I would definitely push them towards the arts. Communication, interpersonal skills, emotional intelligence: those are things AI is never going to be able to replicate. It's going to make us more efficient and make things more scalable, but it's not going to understand human connection, and it's not going to replicate that either way. So making sure those skills are as sharp as possible will make you more valuable in the job market.

Five years from now, in the world you're operating in specifically, what do you think changes the most?

That's a good question. I think it will depend on how well we integrate AI into our daily work. We're seeing AI rolled out at an unprecedented rate, but not all of it is high quality. It's my opinion that integrating AI into workflows, daily tasks, et cetera, is ultimately what will set firms apart and single out the leaders. If you're not incorporating it, that's going to mark you as a laggard. Not to be an alarmist in any way, but these are changing times.

Ultimately, everyone got on the internet, so you would think the same thing here. Lillian, it's been a pleasure talking to you this morning. I learned a lot about Clarity AI and about AI in general. Thank you so much for being on the show today.

Thank you so much for having me. It's a pleasure.