Feb. 10, 2022

#60: Cal Al-Dhubaib (Pandata)

Cal Al-Dhubaib — Founder & CEO of Pandata — on the design and development of human-centered, trusted AI solutions that drive business transformation.

Our guest today is Cal Al-Dhubaib — Founder & CEO of Pandata.

Pandata helps innovative organizations design and develop human-centered, trusted AI. They are a team of creative thinkers, relationship builders, and problem solvers from diverse backgrounds, both professionally and personally. They believe that diversity leads to Trusted AI.

Their high-performance AI solutions drive business transformation while addressing privacy, fairness, and transparency.

Cal is a data scientist, entrepreneur, and professional speaker on AI. He founded Pandata on the core values of "Approachability and Ethics". Empowering organizations to design and develop AI solutions that grow their bottom line, Pandata has overseen 80+ transformative projects with leading global brands including Parker Hannifin, the Cleveland Museum of Art, FirstEnergy, and Penn State University.

Cal is especially passionate about the ethics of AI and how organizations can orchestrate the right talent to support AI initiatives. Cal has been recognized as a Notable Immigrant Entrepreneur, Crain's Cleveland 20 in their 20s, and two-time Cleveland Smart 50 recipient. In addition to becoming the first data science graduate from Case Western Reserve University, Cal is also known for his role in advocating for careers and educational pathways in Data Science through workforce development initiatives.

Learn more about Pandata: https://pandata.co/
Follow Pandata on Twitter: https://twitter.com/ohpandata
Follow Cal on Twitter: https://twitter.com/caldhubaib
Connect with Cal on Linkedin: https://www.linkedin.com/in/dhubaib/

--

Stay up to date on all our podcasts by signing up for Lay of The Land's weekly newsletter — sign up here.

Transcript

Cal Al-Dhubaib [00:00:00]:

This is why trustworthy AI is so important. So the discipline of trusted AI is essentially building machine learning systems and AI systems that intentionally consider bias, intentionally consider privacy and how data is being used to build these systems in a transparent way, and focus it on the ultimate value being created for humans and the ability to audit and understand. If we see a pattern that we don't trust or we want to question, we understand where it learned that from, where it picked it up from.

Jeffrey Stern [00:00:32]:

Let's discover the Cleveland entrepreneurial ecosystem. We are telling the stories of its entrepreneurs and those supporting them. Welcome to the Lay of the Land podcast, where we are exploring what people are building in Cleveland. I am your host, Jeffrey Stern, and today I had the pleasure of speaking with Cal Al-Dhubaib, who is a data scientist, an entrepreneur, and an AI expert. Cal is also the founder of Pandata, an organization which he started to help other organizations plan, design, and scale human-centered AI solutions that grow their bottom line. Pandata has overseen 80-plus transformative projects with leading global brands including Hyland Software, Parker Hannifin, the Cleveland Museum of Art, FirstEnergy, and Penn State University. Cal is especially passionate about orchestrating inclusive teams that are empowered to build trustworthy, fair, transparent, and private AI solutions under the umbrella of a concept called trusted AI, which we will explore much deeper in our conversation. Cal has been recognized as a Notable Immigrant Entrepreneur, Crain's Cleveland 20 in their 20s, and a three-time Cleveland Smart 50 recipient. In addition to becoming the first data science graduate from Case Western Reserve University, Cal is also known for his role in advocating for careers and educational pathways in data science through workforce development initiatives. Please enjoy my conversation with Cal Al-Dhubaib.

Jeffrey Stern [00:02:08]:

So I'd love if you could start really with your path to entrepreneurship, your interest in AI. Tell us your story and how it is you got here.

Cal Al-Dhubaib [00:02:18]:

Yeah, it's been an interesting path, to say the least. But just to kind of frame my technical training: when I was in college, I was studying computational neuroscience and working in healthcare research, and I became enamored with data, working with questions like, how do you keep a population healthy? What insights could you get by looking at health records? And I found myself around 2013, 2014 entering the field that we now know today as data science. Back then, there weren't a lot of data scientists; it was an emerging field, and a lot of people were trained in one thing and then choosing to call themselves data scientists. So it was a really fun time. And I got involved in startups just by being active in the research space and being involved with what at the time was called the Blackstone LaunchPad at Case Western. I met Bob Sopko there at one of the research fairs at Case Western. And he said, have you ever thought about commercializing this? Well, the rest was history. But I fell in love with data and the patterns and the things that you could do. And so by trade, I'm a data scientist, and by trial and error and learning the hard way, I became an entrepreneur.

Jeffrey Stern [00:03:35]:

And maybe just to define what it means: what is data science to you?

Cal Al-Dhubaib [00:03:41]:

Well, that's a great question. What did it mean back then versus what does it mean now? It was kind of the Wild West, and many of us now are familiar with dashboards and interactive ways to visualize data. I mean, in 2013, 2014, very few of these tools actually existed. So originally I got into the field when data science was simply the discipline of being able to make sense of data. Today, we've gotten a little bit more structured. Data science is more closely affiliated with machine learning: the discipline of being able to extract patterns at scale from data sets and start to build intelligent automation.

Jeffrey Stern [00:04:19]:

So maybe just setting the scene for Pandata, I always like to hear: what was the founding insight? What were you focused on from a problem perspective? What intellectually drew you to the problem space that Pandata ultimately has come to solve?

Cal Al-Dhubaib [00:04:36]:

So I'll tell you what Pandata does today, right? And I'll back into how I got there. We help organizations design and develop machine learning and AI-powered solutions. And we're very focused today on the concept of trustworthy artificial intelligence: building automated systems that are transparent, ethical, and fair, in the service of humans. But how did we get there? I initially started with my first venture, a company called Triple Analytics, where I was dabbling in artificial intelligence applied to medical records. Can we use machines to automatically extract patterns and insights that identify unique treatment pathways that make sense for an individual, based on their own individual characteristics and medical history? At the time, this was a relatively novel concept, but a really difficult problem to tackle. And I was a new entrepreneur and I didn't have a lot of experience in product. So ultimately that company failed. But I got to do some really cool things as far as partnerships with major healthcare systems, and started working with some real-world medical data. The one thing I kept hearing over and over again from the clinicians I was partnering with is: we have data and we don't know what to do with it. So we offered to do some sort of research pilots to try to get them to create partnerships with Triple Analytics. Of course, there was no money involved. They weren't paying for us to do that work, right? I was trying to get access to the data to build some cool models. When Triple Analytics ultimately didn't work out, I had to end some of these research partnerships, and that upset some of these clinicians, because to them, that was the most valuable aspect of the relationship. And so I asked the question: would you pay for this, just the service? It never occurred to me that that...

Jeffrey Stern [00:06:37]:

...was actually a great question.

Cal Al-Dhubaib [00:06:42]:

And so that was the birth of Pandata. I was like, we have data. We don't know what to do with it. And as it turns out, it wasn't just hospitals that had that problem. It was organizations like Parker Hannifin, Hyland Software, and FirstEnergy, among many other clients we've worked with, including the Cleveland Museum of Art. And so it all started with: we have data, we don't know what to do with it. And this frustration of, wow, there are some patterns we really wish we could key off on or understand or use to help drive decision making. And over the years, as the maturity of the industry has continued to evolve, and as data scientists have become more prevalent, the nature of the problems we focused on leaned more and more specialized into machine learning: building large-scale systems, pattern recognition on complex data sets. And as machine learning itself evolved and matured, we really started to notice that there are a lot of unintended consequences and issues. Machine learning has gone from the stage of pilots and experimentation to now being in production. And that comes with a whole host of issues. And so that's how we went from, we have data, we don't know what to do with it, to, hey, we're experts in machine learning, to, hey, we help you design and develop machine learning the right way.

Jeffrey Stern [00:07:58]:

Right. And from the company perspective, was it, in those early days, an exercise of just asking people, hey, do you have data that you don't know what to do with? Or what was kind of the first break, if you will?

Cal Al-Dhubaib [00:08:12]:

There was a severe lack of data science talent in Northeast Ohio when I started Pandata. If you did a search on LinkedIn for data scientist or data analytics, you would see fewer than 150 names. It's crazy. That was all of Northeast Ohio; I looked at Cleveland, Akron, and the sum of all titles in that, and a third of them worked for either the Cleveland Clinic, IBM, or Progressive Insurance. So, one, we clearly didn't have enough. Today, that number is getting closer and closer to 1,000. We still don't have enough. But the first big break was actually a partnership with a nonprofit called DigitalC. And they, early in their creation, were very focused on data literacy and digital literacy. So one of the programs that they wanted to get off the ground in partnership with us was a data science boot camp. And that was Pandata's first big break. A client actually paid us to put together an end-to-end program to help create and produce other data scientists. And that gave us access to work with professionals from a lot of different companies. We started to branch out from there, and Pandata has really grown largely through word of mouth over the past five years.

Jeffrey Stern [00:09:29]:

And I think, maybe just for myself and also for others, it will be helpful to understand the larger state of data science and AI today, and maybe some of the history of how we got here. Because I think in practice, some of us have some exposure to it: AI in consumer products, Netflix on the low end of the spectrum with a recommendation, all the way to very high-implication AI, where maybe it's enhancing worker capabilities or even replacing human decision making at the far end of that spectrum. So just how you see the space today: what's the size of it? And then we can get into exploring some AI and data, and we'll tie it back to Pandata.

Cal Al-Dhubaib [00:10:16]:

Sure, that's a loaded question. The history of AI in two minutes or less. In 2010, it was all about pipelines. People couldn't get to their data. Data was trapped in, like, ERP systems and customer revenue management systems that just simply weren't built for the purpose of pulling all this out to do modeling. All these systems were built to support very transactional purposes: hey, I have a customer record, I'm going to put it in, I'm going to go look at the customer record now. They weren't built for, hey, maybe one day I might want to look at millions of data points related to these customers and then do some weird things like predict esoteric attributes that even the companies building these systems couldn't have envisioned. So it was all about building pipelines. And believe it or not, in the 2010 to 2015 range, just being able to build warehouses that could house this data at scale and allow people to just query it, that was a hard task. So early in the evolution of machine learning and AI, that was the focus. Then there was this race of, okay, now we have the data, what do we do with it? We don't know. And that's when Pandata's early days started, right? We have data. We don't know what to do with it. It's like, well, we built all these pipelines. Okay, what questions are we trying to answer? And so folks were starting to figure out, okay, how can we look at data? How do we look at it without drawing the wrong conclusions? And as compute power became less of a barrier to being able to build more and more sophisticated models, we started to really get good at prediction. In fact, there's a famous case where Target was sending targeted ads and could predict that someone was pregnant before the person even knew they were pregnant. This was a big news headline. Yeah, it's kind of crazy how accurate we could get with prediction with the right scale of data. And so building models was the next big phase.
And now we're entering this territory of, well, we've realized that machine learning and AI have gotten quite sophisticated, quite powerful. We've reduced the barriers of being able to build AI tools on things like voice, images, video, natural language, you name it. And we're starting to ask the question of, well, what safeguards do we need to put around it? So we're almost taking a step back and realizing there's a lot of safety and risk management that has to go along with building successful AI systems. And so that's the state of the industry today. And you asked me the question of how close are we to full automation? Not at all. In fact, I find the term artificial intelligence to be deeply problematic, because it implies automation. In fact, the most successful systems are ones that ask: how do we intentionally design a system to cut through the noise and make a human's job easier, focused on the more creative, complex work?

Jeffrey Stern [00:13:16]:

Yeah, no, the augmentation makes a lot of sense. It's what I also spend a lot of time thinking about, just the nature of what we're doing at Axuall. One thing I really want to explore more, because I think it's also more in your wheelhouse, is this concept of trusted AI. And as I was reading about the work that you're doing and preparing for this, I remembered this Microsoft fiasco on Twitter a few years back. As I remember it, and you can add some color here: over time, as you feed more and more data to these systems, there are biases that can emerge, and they can start doing crazy stuff. And I think we've seen this play out in a lot of the text generation models, but Microsoft had this notorious Twitter bot that started getting fed more and more data, and it became horribly racist very quickly. So why do these kinds of things happen? I think it's maybe a good wedge to talk about trusted AI and why it is that inclusion matters in this space. And how do you think about that overall?

Cal Al-Dhubaib [00:14:32]:

So not only was it racist, but it also became very mean. It was just rude to people. So let's talk about the purest definition of artificial intelligence. It's essentially software that excels at recognizing and reacting to complex patterns, and it can produce new and novel results in situations it might not have explicitly seen before. Instead of giving it hard sets of rules, it's kind of learning to infer rules related to these patterns. So chat bots, where it's actually creating text: it's nothing more than a parrot today. It's saying, okay, well, this is the next most likely word that makes sense here. The problem that we have when we're building these models at scale is that a chat bot that really seems very human-like in its output is typically exposed to millions of examples of sentences or things people have said, billions of documents. The most powerful text generation algorithm that exists today is called GPT-3, and you may have heard of it. It's been in the news a lot lately. It's used to power a lot of these creative-assistant-type tools. And that was exposed to billions of documents that spanned everything from generic human knowledge to blog posts to snippets of code. It's near impossible to actually audit and understand everything it's been exposed to, and the level of potential toxicity, or correctness or lack of correctness, of the examples it's seen. So all it's doing is learning these patterns. And if it sees enough of a certain type of pattern, whether that pattern is good or bad, it learns to recreate that pattern or key in on it. And the way we see AI go wrong, I mean, there's no shortage of examples. Tay was one. But something a little bit less extreme than a mean, racist chat bot that still nonetheless had catastrophic outcomes was in a healthcare system. This was a few years ago now.
There are actually algorithms used widely across healthcare systems, impacting over 100 million patients a year, that are used to predict things like: how likely is this patient to be readmitted or to have additional complications? And hospitals use this to prioritize who should be getting more care, who should be scheduled for follow-up, to try to reduce these outcomes. Great, it makes sense to use machine learning for this. What ended up happening was, in this particular case, with white patients and black patients that were equally at risk, the black patients would get a lower risk score, effectively putting the white patients ahead in line for prioritized care, and black patients ended up having worse outcomes. How does this happen? The pattern that was learned in this instance was that people who come from certain zip codes historically have fewer visits. And so it learned fewer visits equals less risk and attributed that outcome to that group. So when we talk about building these models that recognize patterns at scale, and we have very limited tools to inspect and interrogate what patterns a model is actually learning, this is where we start to enter into the danger territory. So this is why trustworthy AI is so important. The discipline of trusted AI is essentially building machine learning systems and AI systems that intentionally consider bias, intentionally consider privacy and how data is being used to build these systems in a transparent way, and focus it on the ultimate value being created for humans and the ability to audit and understand: if we see a pattern that we don't trust or we want to question, we understand where it learned that from, where it picked it up from.
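The proxy-bias failure Cal describes can be sketched in a few lines of Python. Everything here is hypothetical: the scoring rule and the visit counts are invented to show how a learned "fewer visits = less risk" pattern penalizes a group whose visit history was suppressed by access barriers, not how any real readmission model works.

```python
# Toy sketch of proxy bias: two groups with identical true health risk,
# but one group historically records fewer visits due to access barriers.
# A model that learned "more visits -> higher risk" deprioritizes them.

def naive_risk_score(visits_last_year):
    """Learned pattern: more past visits -> higher predicted risk."""
    return min(visits_last_year / 10.0, 1.0)

# Equal underlying risk, unequal access to care (hypothetical numbers).
group_a_visits = [8, 9, 7, 10, 8]   # historically well-served zip codes
group_b_visits = [3, 4, 2, 5, 3]    # under-served zip codes, same true risk

score_a = sum(naive_risk_score(v) for v in group_a_visits) / len(group_a_visits)
score_b = sum(naive_risk_score(v) for v in group_b_visits) / len(group_b_visits)

# Despite equal true risk, group B is deprioritized for follow-up care.
print(f"group A mean risk score: {score_a:.2f}")
print(f"group B mean risk score: {score_b:.2f}")
assert score_b < score_a
```

Nothing in the scoring function mentions zip code or race; the bias rides in entirely on the proxy feature, which is why it is so hard to spot without deliberately auditing outcomes by group.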

Jeffrey Stern [00:18:33]:

A few follow-ups on that.

Cal Al-Dhubaib [00:18:34]:

The can of worms.

Jeffrey Stern [00:18:36]:

Yeah, no, it's very interesting, though. So there are these ideas of fairness, transparency, privacy. From the transparency perspective, is it that most systems today that are implementing this machine learning and AI at work are opaque? Or is it that, if you were to try to figure out where the bias comes from, that's hard to do?

Cal Al-Dhubaib [00:19:02]:

So just to give you a simplistic example of how a model might work: let's say we're trying to predict house prices, and all we have is square footage. All right, we try to build a model that looks at house prices and the number of square feet, and tries to come up with an estimate of, okay, for each square foot, how much should I add to the price of the home? That's a very interpretable, explainable model. If we give it, like, 1,000 square feet, we'd expect to see a certain answer. We can make that model a little bit more complex. We can give it square feet and maybe a zip code. So we add more and more things. We can still back into that. Think of these as features that span a spreadsheet. Imagine if you had a thousand of those. Imagine if you had a million of them. Imagine if they weren't as clear-cut as square feet and zip code, things that a human could interpret. Maybe it's some weird combination that a model has decided on: oh, if I multiply square feet by whatever and add to it these other weird features that I found over here, it helps me predict the price with a lot more accuracy. As we've gotten more and more complex models and gotten better and better at more sophisticated feature engineering, our models have gotten more accurate but harder for humans to interrogate. So it might tell you, this is why I predicted this, but it ultimately would make no sense to even the data scientists training the model. So that's why we have an issue with the opacity of models. It's not that we're intentionally designing them this way. It's that the ability to build more accurate models has come at the expense of being able to explain them. And so now a lot of energy and a lot of really smart people are working on: can we build more accurate models that are also equally interpretable or explainable? Or can we back into some way of understanding, in human language, why a model is making the decision that it's making?
In fact, the EU right now is proposing legislation that's going to require certain levels of transparency depending on the severity of the application and the consequence it might have on humans.
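The square-footage model Cal starts from can be written out directly. This is a minimal ordinary-least-squares sketch with made-up prices; the point is that its single learned coefficient is fully auditable, which is exactly what disappears once thousands of engineered features enter the model.

```python
# Interpretable end of the spectrum: one feature, one learned coefficient.
# The listings below are hypothetical.
sqft  = [1000, 1500, 2000, 2500, 3000]
price = [150_000, 210_000, 290_000, 350_000, 420_000]

# Ordinary least squares for price = a + b * sqft, closed form.
n = len(sqft)
mx, my = sum(sqft) / n, sum(price) / n
b = sum((x - mx) * (y - my) for x, y in zip(sqft, price)) / \
    sum((x - mx) ** 2 for x in sqft)
a = my - b * mx

# The model is fully auditable: every extra square foot adds $b to the price.
print(f"learned price per square foot: ${b:.0f}")
est = a + b * 1800  # estimate for a hypothetical 1,800 sqft home
print(f"estimated price for 1,800 sqft: ${est:,.0f}")
```

With a million opaque engineered features in place of `sqft`, the same fitting machinery still produces coefficients, but no human can read a story out of them, which is the opacity problem Cal describes.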

Jeffrey Stern [00:21:13]:

Yeah, I imagine the regulatory side also has some implications, and I'll ask about that in a sec. But just to kind of round out trusted AI, and maybe this is almost a philosophical question: what is fairness? How do you discern that in a model?

Cal Al-Dhubaib [00:21:31]:

I point people to this YouTube talk that covers 18 different definitions of fairness, most of which contradict each other. I'll give you the short answer; I'll save you some time. It's a great hour-long lecture, but you don't need to watch the whole thing. At the end of the day, we have to define fairness mathematically, and different definitions of fairness can even contradict each other mathematically. There's no universal definition. It depends on a case-by-case situation, and on having the right decision makers and stakeholders, representative of the humans that an algorithm might impact, coming to consensus over what we're going to determine as fair in this specific situation. So it's not cut and dry. There's no hope of automating this away. It requires people, and it requires people that look like the people the model might impact. And that really matters. So, long story short, it really does depend.
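The mathematical contradiction Cal mentions is easy to demonstrate on a toy example. All counts below are invented: two groups get identical selection rates, so the model looks fair under a demographic-parity definition, yet their true positive rates differ, so the same model looks unfair under an equal-opportunity definition.

```python
# Two common fairness definitions disagreeing on the same predictions.
# Hypothetical per-group counts for a binary "select for program" model.
groups = {
    "X": {"n": 100, "qualified": 20, "selected": 10, "selected_and_qualified": 8},
    "Y": {"n": 100, "qualified": 5,  "selected": 10, "selected_and_qualified": 5},
}

def selection_rate(g):
    """Demographic parity compares these: share of the group selected."""
    return g["selected"] / g["n"]

def true_positive_rate(g):
    """Equal opportunity compares these: share of the qualified selected."""
    return g["selected_and_qualified"] / g["qualified"]

sr = {k: selection_rate(g) for k, g in groups.items()}
tpr = {k: true_positive_rate(g) for k, g in groups.items()}

print("selection rates:", sr)        # equal -> "fair" by demographic parity
print("true positive rates:", tpr)   # unequal -> "unfair" by equal opportunity
assert sr["X"] == sr["Y"]
assert tpr["X"] != tpr["Y"]
```

Forcing the true positive rates to match here would require unequal selection rates, so satisfying one definition breaks the other; that is why, as Cal says, stakeholders have to pick a definition case by case.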

Jeffrey Stern [00:22:32]:

All right, just one or two more macro questions here, and we'll bring it back to Pandata. On the regulatory front, though, I think it's interesting, because often what I've seen is that regulation follows very acute, specific examples that everyone kind of latches onto the narrative around, something like the racist Twitter bot from Microsoft or the healthcare example you described. So what does the regulation look like today, and how do you see it affecting the landscape?

Cal Al-Dhubaib [00:23:08]:

The regulation is nonexistent today. And this is part of the problem. We've been able to do things, and we now have companies deciding: should we do this? Is this fair? Is this right? We're going to put together a little committee and then they're going to decide for us. But there's not been any legislation that's come out yet. GDPR is something a lot of people are now familiar with. That's the General Data Protection Regulation out of the EU. And it's this notion of: you have the right to be forgotten, you have the right to understand how your data is being used. It was initially implemented with a focus on EU citizens, but they told American companies, if you have data on EU citizens, we'll come after you and fine you. This caused a lot of multinational companies to react and scramble to change their practices. And now we all know when we go to a website, we see the thing that says Accept All Cookies, this is what's being collected, right? That was the actual impact of that legislation we're seeing years later. California has since adopted certain laws, and other states are following suit. We're now seeing the EU do the same thing again, but with AI, and specifically focusing on how risk should be looked at with respect to AI systems. There are designations like: this is going to impact life or death, or this could impact fairness or treatment, or this might have no impact at all. And they're stipulating certain levels of controls and audits and requirements, like how explainable models can or should be in these situations. And there are some pretty hefty fines associated with violating these rules; in an earlier version of the proposed legislation, up to 6% of a company's annual revenues. They're not messing around here. So what I suspect is going to happen: some version of that law is going to come out, and we're going to see the same thing happen in the AI space that happened with the broader data space with GDPR.
If there's one lesson that we can learn from that, it's that the companies that weren't prepared for it and didn't take it seriously ended up getting fined, and eventually found their way there in two to three years or ultimately abandoned the tools and solutions they were building. So if there's one aha from this for people dabbling in the space, it's to start building for the fact that this is coming, and that there isn't this Wild West, unlimited-runway environment in the machine learning and AI space anymore.

Jeffrey Stern [00:25:30]:

Right. And so in a lot of ways, as I am understanding it, you've essentially positioned Pandata to be ahead of that wave that is coming and helping folks, in some ways, prepare.

Cal Al-Dhubaib [00:25:43]:

Indeed. Right. How do you build AI systems the right way? There's the quick way: hey, we're going to build a model that predicts some things and we're going to put it in production. And that doesn't necessarily guarantee problems, but it opens you up to some risk. What we do is help organizations think about all these considerations that help keep them safe, and understand when, hey, building this in this way on these data sets, without these safeguards in place, could open you up to risk down the line.

Jeffrey Stern [00:26:11]:

So take us from the DigitalC days to today. What has transpired in Pandata's life here? And how do you talk about the state of the business as it is right now?

Cal Al-Dhubaib [00:26:22]:

So I'm really excited about where we're headed this year, but I'll kind of give you the stages of evolution from an entrepreneur's perspective. The first couple of years of Pandata felt a little bit more like being a solopreneur with a few resources that I was working with on the team. Our COO, Nicole Ponstingle, joined the team in, I want to say, 2015. And that's really when we first started to have some data scientists that we were hiring and bringing on. And instead of just operating as an independent consultant, we were actually operating as a team on projects. And that was exciting. Pandata grew quite a bit from 2016 to '17, and we really started maturing our practice. As we shifted into focusing on machine learning and AI, we've gotten a lot better at having standard offerings like Discovery and Design, where we have a standard, repeatable process to help organizations quickly prototype AI capabilities. And we're reducing the complexity involved in getting models into production. So I'm really excited. We just formed an exciting relationship with a company called H2O. It reduces the barriers to building and monitoring transparent models in production to build intelligent apps. And we're working on a really cool partnership targeted at startups in the middle market. So a lot of this year is going to be focused on bringing that to a larger audience and getting more startups building these cool tools and intelligent automated processes.

Jeffrey Stern [00:28:03]:

Can you give us an overview of what have been some of the projects and deployments you've worked on? Maybe some of your favorite examples of companies you've helped and what they're doing and what the outcomes have been?

Cal Al-Dhubaib [00:28:17]:

Yeah, one of my favorite projects was with the Cleveland Museum of Art. It's always cool, because when you talk about data science and AI and then a museum, you're like, what's the connection there? We're blessed here. It's the second-best art museum in the country, second only to the Met, and it's 100% free. So what that means is fewer ticketed opportunities, fewer opportunities to understand where guests are going in the space and how they interact with the art. Our museum also invests a lot in technology, like the interactive space, the ArtLens Gallery, if you've been there and seen that, but also some of the more recent exhibitions that involve virtual reality. So the biggest question on their mind is: is this working? And so we helped them leverage the data in their WiFi system, understanding how individuals were navigating the space. And we came across some really cool insights, like: people who spent five minutes or more in the ArtLens Gallery space were likely to spend up to an hour longer on average in the museum than people who didn't. So it was a really cool project. It involved some messy data; we were working with WiFi pings off of routers. But we translated that into a story about where people were going, in a privacy-preserving way, and how long they were spending in different galleries. And that gave us some really cool insights. Another one that I love is in the healthcare space. We were working with a company involved with health insurance claims, and they try to identify patients that could qualify for better subsidies from the government. They basically mine this data and identify patients that could qualify. They help the patient get access to better care, and they help the insurance company cover it, so everybody wins all around.
And we helped them basically translate 30 million or so patients and their medical records into a powerful tool that can identify, with a lot more precision, which ones would qualify for this, especially as regulations change, and especially as different judges or case managers reviewing these applications might react to them. And we also did it in a way that helped them understand which minority groups were having a harder time getting through the process. So we ultimately helped them grow their revenue in that business line by over 30%, by helping them identify people with greater precision.

Jeffrey Stern [00:30:49]:

In your experience, kind of working with the breadth of companies that you have, do you think the winners in this world will be the ones that have been more like AI-first companies from inception, or older companies sitting on data, trying to figure out a new way of thinking and incorporating it into new models?

Cal Al-Dhubaib [00:31:10]:

Well, the good news is that the data to be able to build almost any type of model can be acquired now with far fewer barriers. In fact, in some cases, you don't even need to build or train your own models. There are a lot of building blocks and design blocks out there. I don't know that it makes a difference to be an AI-first company. It does make a difference to be a data-first company, and to understand that whenever you're investing in building an AI system, it ultimately depends on the quality of the data that the model, the building block, was originally trained on, and how much you can trust that. Winners in this space also understand that this is an experimental process, right? AI isn't a one-and-done thing. It's a portfolio mindset. What I mean by that is, when you're building a model, let's say to predict the likelihood of a customer to churn, or a recommendation that might create more engagement, or any number of things, you can't guarantee 100% performance. You might, at best, be able to say this works a little bit better in these types of situations and a little bit less reliably in those types of situations. But expecting 100% just doesn't work. So companies that understand that also understand how to build around the failure cases. Okay, is this still usable? If it fails, what do we do? Does this open us up to any risk? If it fails, how do we handle that? And if a project ultimately fails, what do we learn from it, and what are we going to invest in next, instead of running away from it? These are common traits of organizations we see that use AI successfully.
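One common way to "build around the failure case," as Cal describes, is to route low-confidence predictions to a human instead of acting on them automatically. This is a hypothetical sketch, not Pandata's process; the churn example, threshold, and scores are all invented for illustration.

```python
# Sketch of graceful failure handling for an imperfect churn model:
# automate only when the model is confident, escalate the rest.

CONFIDENCE_FLOOR = 0.7  # below this, the prediction is not trusted

def route_prediction(churn_probability, confidence):
    """Decide what to do with one model output."""
    if confidence < CONFIDENCE_FLOOR:
        return "human_review"          # model is unsure: don't automate
    return "retention_offer" if churn_probability > 0.5 else "no_action"

# (churn probability, model confidence) for three hypothetical customers.
cases = [(0.9, 0.95), (0.2, 0.9), (0.6, 0.4)]
decisions = [route_prediction(p, c) for p, c in cases]
print(decisions)  # -> ['retention_offer', 'no_action', 'human_review']
```

The design choice is the point: the system stays usable even when the model fails, because the failure mode is a human taking over rather than a wrong automated action.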

Jeffrey Stern [00:32:47]:

So I imagine that you now also sit on a lot of data, having worked with all these companies, and I'm curious if you've asked yourself the question, what do we do with it? Is the path for Pandata, in your mind, the trajectory of continuing to work on projects, or is there an opportunity to productize some of what you've learned over the past few years?

Cal Al-Dhubaib [00:33:13]:

It's an interesting question. We've definitely seen some repeat pain points. Today, there is a huge explosion in AI-powered tools, so for now, we're staying out of that race. We don't actually own the rights to any of the data we work with for our clients. So, interestingly enough, Pandata has the tiniest amount of data in our own database systems, but we're seeing constant patterns. So if there were an area we would double down on, it would be AI applied to voice tech. There is so much of it out there that is untapped, and there are so many cool things you can do with it. If you have a teeny tiny call center that handles 10,000 calls a year, which is relatively small in the grand scheme of things, lasting three to five minutes each, that's over 500 hours of customers telling you exactly what they want. And companies only ever analyze a fraction of it, for quality control purposes. So that's just the scale of data and insight on consumer behavior that's exploding these days.

Jeffrey Stern [00:34:20]:

Yeah, I can imagine. So you mentioned that, looking forward in the year, there's a handful of things that you are looking forward to and excited about. What are some of those things?

Cal Al-Dhubaib [00:34:33]:

Well, I'm really excited about this relationship with H2O. Typically, this is a tool that's been available to enterprises that have large data science teams, and a big barrier for a lot of startups getting involved in the AI space is the cost of being able to access a data scientist: hiring one, and having the right tools to accelerate the time to delivering on value. Having this unique partnership that's focused on serving startups specifically, I'm really excited to see what we do with some of our startup clients, but also getting to reach more startups that are interested in building these types of tools to add competitive advantage to their arsenal.

Jeffrey Stern [00:35:11]:

I know it can always be a little bit challenging to look too far into the future, but ultimately, when you think about the vision for the future of Pandata, what is the actual impact that you hope, looking back, to have had and accomplished with the company?

Cal Al-Dhubaib [00:35:30]:

So, clear and simple, it's our ten-year Big Hairy Audacious Goal for the company. I don't know if you're a fan of EOS, the Entrepreneurial Operating System, or have heard of it. So we've got our ten-year BHAG: it's creating a billion dollars in value for other organizations. So we hold ourselves accountable in projects for documenting, all right, what was the impact here? What was the economic value, what were the intangible values, and how did we help? Our goal is also to have one of the most diverse data science teams. We really believe in the power of this: if you're going to build trustworthy AI systems, you need to have diverse people at the table. So that's something we're making a big dent in as well. And I want to look at Pandata ten years from now, let's say we've continued to grow: how cool would it be to have a group of 100 of the most diverse data scientists who have helped collectively create a billion dollars in value for other organizations over the course of those years? If there's anything that I've learned from growing Pandata, it's that the state of AI changes year to year. In fact, there are things that are true today that weren't true six months ago. And we spend a lot of time and energy staying up to date with the field. So it's really cool being in a position where we're constantly helping other organizations navigate the changes. I get asked a lot, can you imagine your job being automated away? No. There's so much work to be done, and the one guarantee I have is that it's going to look totally different two to three years from now than it does today. And that's kind of what we're in the business of doing and staying ahead of.

Jeffrey Stern [00:37:08]:

It's one of the questions I had for you, actually, which is, knowing how quickly the space changes, how is it that you stay abreast of all the stuff that you need to? How do you parse signal from noise as you make your way through the developments? And what do you wish that you understood well today that you don't about the space?

Cal Al-Dhubaib [00:37:31]:

There are things that I wish I was doing more of. I wish I was getting more involved in the modeling, and this is something I've had to kind of step away from. An area that I am just enamored by is natural language processing and generation. I've got great team members who are staying up to speed with the advances, but that's an area I wish I could personally dive a little bit deeper into. I think it's cool being able to understand the puzzle of how we communicate and then do cool things with it with a program. But something that I theorized early on when we were growing Pandata is that we really need to limit and cap how many hours we're spending on projects. When you run a consulting practice, at the end of the day, we make money when we're working on projects; we have the billable hour. It's the enemy. I hate it, but it's kind of how we have to have these transactions run with our clients. And whether we do it fixed-fee or not, there's still this effort. So given this idea that our hours are our inventory and therefore our hours are our revenue, then saying something as crazy as we're going to cap it at a certain amount for each consultant across our team, so that we always have space to stay abreast of all the changes, invest in taking courses and attending conferences, and understand all the latest and greatest happening in the space, seemed a little bit crazy. But that's part of how we've been able to grow Pandata over the years. We exclusively hire associate data scientists and train them up from within. We make it a part of our culture that ongoing learning and education is part of what we do and part of thriving here. So if you want to be doing the same thing over and over again, you're going to get burnt out. But if you want to constantly be learning and growing, we've created the right environment to allow that to happen. So that's part of how we stay ahead of it. And we've managed to work it out so that we can still be profitable while allocating the time to learn and grow. And what that means for our clients is we're always able to help them stay ahead of these changes and sift through the noise themselves.

Jeffrey Stern [00:39:36]:

One of the other questions, and you keep getting ahead of me on these, which is awesome, is how you think about scaling an organization like your own.

Cal Al-Dhubaib [00:39:45]:

So most of what Pandata has done up until now has been word of mouth: deliver on value, continue to find new ways to create value for the organizations we're working with, and invest in education. I spent a lot of time going to conferences and talking, sharing lessons learned, sharing how we failed, how we failed with clients, and how others can avoid those failures. And by doing that, we were able to grow and attract and work with new clients, and continue working with the same clients we've had in new ways. We've got a lot of clients now that we've been working with for two to three years. The one thing I wish I had done more of earlier on was recognize the importance of partnerships. So this partnership I've been talking about with H2O is really key. If I could go back in time a few years and do one thing differently, it would have been to attach ourselves to a racehorse, that is, a value-add tool that helps make machine learning easier, become experts in that tool or platform sooner, and help other organizations use it to create value instead of kind of reinventing the wheel over and over again. I really see this helping us scale at a much faster clip than we have been over the past few years.

Jeffrey Stern [00:41:02]:

I want to bookend our conversation here with a few other questions.

Cal Al-Dhubaib [00:41:07]:

Sure, of course.

Jeffrey Stern [00:41:08]:

Tangentially related: I know you have this involvement with the Entrepreneurs' Organization.

Cal Al-Dhubaib [00:41:14]:

Yeah.

Jeffrey Stern [00:41:14]:

Can you just tell us a little bit about that and kind of its role in your journey?

Cal Al-Dhubaib [00:41:19]:

So I am an engineer by training, and my dad's an engineer, and I have no one in my family who's an entrepreneur. I didn't even know it was an option to start a company when I was in college. It was Bob Sopko tapping me on the shoulder saying, have you thought about commercializing this? And I said, sure, I can start a company. Yeah, why not?

Jeffrey Stern [00:41:38]:

I love Bob.

Cal Al-Dhubaib [00:41:40]:

He's a great guy, and I attribute all this mess to him as well. That being said, I really had no one in my sphere who was an entrepreneur. I didn't know entrepreneurs. I didn't have mentors. In my first venture, I participated in this competition called GSEA, the Global Student Entrepreneur Awards, which has now been featured on Disney Plus. Really cool. It's a competition by EO, a global network of 14,000 or so entrepreneurs worldwide who have businesses over a million dollars in revenue. I started there as a student entrepreneur, got to represent the US on the global stage, top 40 out of 2,000 students worldwide. And I got to meet some really cool people. Many of them had multiple businesses that were really large, and it was inspiring just to hear their stories and absorb how they think, how they processed, how they thought about entrepreneurship. So fast forward: I joined EO Cleveland's Accelerator program, which helps organizations grow over that million-dollar mark. A year ago, I graduated, joined EO, and have started to lead the Accelerator program, which is really cool for a lot of reasons. One, we started the year with 80% of our group being women-owned businesses, and this year, we're on track to graduate the single largest number of women-owned businesses over that million-dollar mark. I feel so passionate about my work there, because when you transform the businesses that are growing and scaling in a community, you have a direct and material impact on their employees and on the communities they come from.

Jeffrey Stern [00:43:14]:

And they have a large presence here in Cleveland.

Cal Al-Dhubaib [00:43:17]:

Yeah. We've got a little over 100 members in the Cleveland chapter, and then we're now approaching 40 members in the accelerator program.

Jeffrey Stern [00:43:27]:

Amazing.

Cal Al-Dhubaib [00:43:28]:

Yeah. And it's an organization that's built around this idea that we learn from each other. We bring in once-in-a-lifetime speakers with great stories to tell, all founded on sharing from experience, and it's completely transformed the way I think about business and the way I approach problem solving. You get to absorb the wisdom of others in a faster way than trying to figure it out the hard way. And it's really great to hear that other people were exactly where you were stuck, and comforting to know that it's natural and normal. So it's been a great tool to help me leapfrog Pandata. We certainly wouldn't be the size that we are today if I hadn't joined EO.

Jeffrey Stern [00:44:16]:

Yeah. Other people have figured it out, and you do not have to reinvent the wheel.

Cal Al-Dhubaib [00:44:23]:

No. Especially when it comes to, like, a business. A business is a skeleton. What you do might be a little bit different at the end of the day, but running a business is pretty much the same no matter what you're doing.

Jeffrey Stern [00:44:34]:

Yeah. So with that, we'll kind of wrap it up. What have been some of your learnings in your entrepreneurial journey as you reflect on the company-building process? What are the things that you're taking with you today?

Cal Al-Dhubaib [00:44:50]:

I've got a whole list; I'm going to give you the CliffsNotes of my favorites. Learn to say no. That was one of the most valuable lessons, and I keep learning this lesson again and again over the years in different ways. When you say no to the wrong clients, you get to say yes to a lot more of the right clients. When you filter out noise up front, you get to spend more time doing the things that matter and make a difference. Another thing that I learned is the value of being vulnerable. Nobody's trying to steal your secrets. If you waste time trying to put up a front like you have it all figured out, and don't ask for help when you need help, guess what? You're not going to get help. So I've really learned, especially over the past two years of all the craziness of running a business during the pandemic, which is its own other podcast topic, the value of being able to approach someone and say, this is where I'm at, this is where I'm struggling, can you help me? It has made a total difference. I'm shocked: people want to help when you ask for help. The last thing I'd say I wish I had learned earlier on as an entrepreneur is the value of having difficult conversations, and running at the difficult conversations instead of away from them. Inevitably, there are these uncomfortable things, right? Someone might have been a good fit for the organization at one point, but is no longer a good fit. And we have this perceived sense of, well, I want to do right by them, instead of having the conversation of, hey, the needs of the organization are changing. Where are you at? What should we do about this? How do we arrive at a conclusion here? I've really had to learn a lot of that in the last twelve months. So those are three things for me: learn to say no, be vulnerable, and run towards the difficult conversations, because you can have a difficult conversation and still be a kind, nice person who does right by other people.

Jeffrey Stern [00:46:47]:

Those all resonate quite a lot.

Cal Al-Dhubaib [00:46:51]:

Awesome. I'm glad to hear it.

Jeffrey Stern [00:46:53]:

So the closing question for everyone on the show so far is painting a collective collage here of not necessarily people's favorite things in Cleveland, but of things that other people may not know about. And so with that, I will ask you for what are your favorite hidden gems in Cleveland?

Cal Al-Dhubaib [00:47:13]:

So, I mean, I've talked about this already, but I'll say it again: the Cleveland Museum of Art. I'm shocked, I'm floored by the number of people who live here and say, yeah, I was there ten years ago once. It is such an amazing international gem, in a beautiful setting. I can't say enough good things about it. If you haven't gone, you've got to go. But that aside, one of my favorite things to enjoy is our park system. We have one of the best park systems in the country, and there's a lot of investment going into it. There are great trails if you're a runner, and if you like the waterfront, Edgewater Park is pretty special. So I love enjoying the nature that we have around us, and the Cleveland Museum of Art. Two things that just never get old.

Jeffrey Stern [00:48:00]:

They really don't. Well, Cal, thank you so much for coming on and sharing your story and the work you're doing at Pandata. That's very exciting.

Cal Al-Dhubaib [00:48:09]:

I so appreciate the time and I hope everyone out there enjoys the lessons learned and checks out some of the fun things Cleveland has to offer. I'm really excited for the budding entrepreneurship scene here.

Jeffrey Stern [00:48:22]:

Me as well. If folks have anything they'd like to follow up with you about, what is the best way for them to do so?

Cal Al-Dhubaib [00:48:28]:

So, you can find me on LinkedIn. My last name is Al-Dhubaib, spelled the traditional way, but if you look up talent and data, you will find me. So yeah, please connect with me on LinkedIn. I'd love to hear what you're working on.

Jeffrey Stern [00:48:42]:

Awesome. Well, thank you again.

Jeffrey Stern [00:48:44]:

Really appreciate it.

Cal Al-Dhubaib [00:48:45]:

Yeah, my pleasure.

Jeffrey Stern [00:48:47]:

That's all for this week. Thank you for listening. We'd love to hear your thoughts on today's show, so if you have any feedback, please send over an email to jeffrey@layoftheland.fm or find us on Twitter at @podlayoftheland or @sternJefe, J-E-F-E. If you or someone you know would make a good guest for our show, please reach out as well and let us know. And if you enjoy the podcast, please subscribe and leave a review on iTunes or on your preferred podcast player. Your support goes a long way to help us spread the word and continue to bring on the Cleveland founders and builders we love having on the show. We'll be back here next week at the same time to map more of the land.