Patricia Wu is joined by Alexandra Ebert, Chief AI and Data Democratization Officer at Mostly AI, at Money20/20 Europe to discuss the mission of Mostly AI, which focuses on making data more accessible while ensuring privacy through the use of synthetic data. This technology is particularly beneficial for large banks, insurance companies, and public sector organizations, allowing them to leverage their data without violating privacy laws like the GDPR.
I'm here with Alexandra Ebert, Chief AI and Data Democratization Officer at Mostly AI.
Thanks so much for stopping by.
Thanks for having me.
So let's just start with Mostly AI.
Tell us a little bit about it.
Well, Mostly AI was founded seven years ago in Austria, but it's a global organization with a mission to make data more available to everybody.
We've developed synthetic data technology, which helps large banks, large insurers, and public sector organizations use AI to anonymize their existing customer data, employee data assets, and big credit card histories in a way that retains the value of the data but makes it perfectly privacy-safe. That's a game changer when we think about all those banks out there trying to tap into the treasures of their data without infringing privacy laws like the European GDPR or the California Consumer Privacy Act.
And all the banks right now are talking about agentic AI.
If I had a dollar, or even better a Bitcoin, for every time I hear this term... So let's cut through the buzz.
Where are we really with agentic AI adoption and financial services?
I do agree that we see a lot of hype around this topic currently, and every few months it's another thing.
Yes, AI agents are in production today, but what's being automated is mostly simple workflows, as opposed to what is in the realm of the possible: for example, AI agents that help us automate tasks far more complicated than filing a complaint or having a credit card automatically canceled when it's stolen.
So we're at the very beginning.
There are a lot of challenges that still need to be figured out from a governance perspective, from a technical perspective, from a legal perspective, but I see great potential for this field.
All right, so let's see what is possible right now.
What is the most exciting use case you're seeing?
As mentioned, today it's mostly workflows, and personally I don't get that excited about workflows, even though some employees are definitely happy that the more tedious processes, where they had to use 15 different tools, are no longer something they have to do manually.
The exciting ones for me are two.
On the one hand, when you think about bank customers, there's the possibility of much more personalized financial advice that also takes action, for example, to make sure you have sufficient retirement savings.
Many people wear smartwatches and similar wearables, which help us personalize health care, and I think we are on the brink of doing something similar: with the availability of synthetic data and AI agents, we can make sure that not only the wealthiest clients but basically any customer of an organization gets tailored, personalized financial advice to make sure they're better off.
So that's one thing.
The other thing, when we think about the employees of large organizations, banks, and financial service providers, is really this big question of onboarding and training: making sure that the wealth of knowledge that usually exists among all the bright minds of an organization, but isn't necessarily documented and readily available for new joiners, becomes digestible and accessible, because they have one-to-one AI tutors and AI onboarders who help them get the information they need at the right point in time.
So speaking of the employees, a lot of people are worried that their jobs will be taken away.
That's definitely the case, and I think this is why it's so important that people dissect this hype, because as soon as we have hype, we have fear, and we have employees not being open to adopting AI technologies.
Of course we will see organizations focusing on productivity gains, cost reductions, and layoffs alone, but I think that will be a race to the bottom.
Organizations should think much more about how, if they want a sustainable business advantage even on a 5, 10, or 15 year time horizon, they can use these new capabilities to empower their employees and help them, as we discussed in the panel earlier, focus not only on the super urgent things but also on the important things, the ones with big impact that they never find time for while juggling all the different projects they're currently working on, and actually give them the capabilities to do that. Or, even if you're a business user without a background in coding, to just prototype your ideas and have something tangible to reason upon, as opposed to just talking about the next big invention. So there's so much potential, but organizations really need to set themselves up to reap that and not only reduce costs.
So how do they do that if you were to give one piece of advice to these organizations as they embark on this journey, what would it be?
There's no secret sauce where you do one thing and you'll be well off, but if I can really only pick one, for me it would be AI literacy: educating your employees, because everybody will be using AI tools in the future. Educating them about what AI actually is, what it is not, what its limitations are, and how they can use it most effectively will, I think, be a big, big game changer. Organizations are seeing way better results when employees know what these systems can actually do, as opposed to believing, "OK, this now solves every problem, I can lean back and it will do my work for me." So AI literacy and education, I think, is the way to start.
OK, so thinking about the potential, what are the challenges that are standing in the way?
I mean, you mentioned governance before.
I imagine that's a big one. Yeah, absolutely.
There are definitely governance challenges. In our panel, we talked about where we currently are using the analogy of a train for agentic AI.
If you just think about governance and making sure that nothing goes wrong, it's obviously way easier if you just have a train which runs on its tracks, as opposed to, let's say, a car which runs on a highway and has more degrees of flexibility, or a drone which is in the airspace and can move in all directions.
AI agents are complicated.
They have some degree of autonomy, they can also adapt, and governing that is a challenge.
One thing I think is super, super interesting is this big kind of chicken-and-egg problem: we have the potential to use AI for more complicated tasks, but for AI to be able to learn them, it would actually need to sit on our shoulder or see our screen all the time while we're working, to learn how we're actually doing things.
And that's something that many people rightfully feel wary about, and some employment laws will definitely put a stop to it.
So figuring out how to balance that and how to safely train those systems, while making sure we adhere to current laws, will be a big challenge before we get into these more promising spaces of what is possible with AI agents, beyond just the workflows we see today.
So it's an exciting future, but definitely still evolving. Absolutely.
Thank you so much, Alexandra.
