Phil Howard: All right. We are live today, everybody. It’s a very special day. We’re talking with Jaron Jones. We want to talk about AI, business adoption, and best practices. What do you think the current landscape in the marketplace is? Because there are some things that are going wrong, and I think there are a lot of places to make a lot of bad decisions. I’d like to hear from you what you think the current AI landscape looks like, what the reality is, and where we’re making a lot of mistakes or where we’re making good decisions.
Jaron Jones: Yeah, there are two really important things I’d like to touch on before we get started on that, but good context. One, MIT recently put out a study on companies adopting agentic AI, and for ninety-five percent of companies, it’s either not working or they’ve basically scrapped the project. There’s a little more context, so I’d suggest reading it, as with any paper. But obviously, a lot of businesses are struggling with it and with the landscape. A lot of businesses have problems and things they would like to automate, make less expensive, so on and so forth. And maybe those things have been sitting on a shelf for a long time. Maybe you have ten or twelve really important enterprise workflows in your company, and maybe they all exist at certain levels of automation and abstraction. Right now with AI, somebody’s basically coming into every company, multiple people, salespeople, people from other orgs, so on and so forth, and saying, hey, these ten or twelve problems, AI can totally solve them. And maybe for one or two of them, it can. For all twelve? It probably can’t. Maybe there’s a specific company that can do it, but probably not. It’s a landscape of promises, above anything else, and I think that can be problematic.
The first place I start personally, and I think a good spot for us to start, is really trying to approach everything still as an automation problem and trying to look at the individual nodes. Maybe you have something that needs to call workers in your company, and maybe they provide responses like one, two, three; they give a digital input, it goes back to the database, and then some other workflows happen. And somebody could come into your company and say, hey, AI can do all this for you. Well, calling services already exist. They do it really well, almost one hundred percent of the time. And automation scripts already exist. Probably not the best example, but you can have another problem where you receive text input from a customer. It’s text recognition, and you need to say, hey, we need to be able to tell how upset these people are, so maybe we can provide them with a different level of support, whether it be technical support, sales, something like that. You could have a person go through all of those, or you could try to use an AI service, which is quite good at that. I don’t know if anyone’s ever played with Lego robotics, where you have five different things that are reliant on each other. Really try to draw all of those things out and look at what AI is actually replacing. Then you look at the specific node, the specific part of the problem you’ve identified AI can replace, and actually start doing synthesis, statistical tests, or some modeling to see how well it can actually replace that specific thing.
Phil Howard: Let’s go back to that Lego model, because I do have a Lego robotics set and my kids play with it. So let’s go back to that metaphor a little bit, maybe break it down, and compare it to a make-believe manufacturing company, maybe.
Jaron Jones: Yeah. I mean, really it’s just breaking it down, because the salesperson is going to come in and say, hey, I can take care of this entire problem for you.
Phil Howard: And what do you think is the main problem that people are coming in and saying AI is going to fix right now? The main problem AI is going to fix. What is it? I’m just wondering what salespeople are coming in and selling with AI, other than top-level MLM-type stuff.
Jaron Jones: Yeah, I think agents and support; I think trying to specifically replace workflows. So somebody might come in and say, hey, you can replace some of your IT support with AI. I’ve seen some of our customers attempt to do this with different levels of results. And they say, hey, AI is going to identify what things it can take care of and what things it needs to send to a person. Maybe. Really, that’s trying to replace somebody who exists at the top of a call queue, who’s going to do the routing, to use a call center metaphor. And it’s a node. You get a call from a customer, somebody has to identify or assess that call, and then it goes through different levels of triage and support. And they’re saying, hey, AI is going to replace this top level of triage, and it might also replace some of the lower levels of support, the things it can take care of. I would say that’d be a big one. People are definitely focusing on agentic stuff, but it depends on the provider. If you have a company that’s scanning emails for threats, that’s not agentic, but those companies are saying you’re using AI; they’re probably utilizing LLMs and deep learning at this point to assess whether an email is a threat or not. So it’s hard. I want to say agents, and I am rambling a little, but it is very specific to the space you’re talking about.
Phil Howard: Let me just make sure I hear you correctly. The main point is, for the sake of technology adoption, we should be focusing on automation first. And how does AI assist in that automation?
Jaron Jones: Yeah. Look at a problem, contextualize it, and look at each specific part of the problem and how coupled or decoupled it is from the others. Really diagram it and try to be more explicit about what AI is actually replacing. Because the conversation, if you were to go to a conference and talk to someone, is “AI is going to replace this part of my business.” Well, it’s not actually going to replace that entire part of your business. It’s going to replace a part of that problem. So which part of the problem is it going to replace? Start talking about it at that level, because you’re not going to have a useful conversation if you’re talking about AI replacing entire arms or business units. Talking to some people, I almost feel that’s where the conversation is going in maybe ten or twenty years, but at this point in time, each service has its strengths and weaknesses. So: contextualizing and identifying.
And then, when you get down to that node, as we talked about previously, let’s say you look at that call center triage problem of calls coming in. Instead of having a human, you have an AI agent trying to handle those calls. I really like the confusion matrix; if you don’t know what that is, I recommend googling it. It’s probably the number one tool a data scientist can give a stakeholder for evaluating how accurate a model is. Because at the end of the day, you want to have a conversation about how many true positives, how many false positives, how many true negatives, and how many false negatives: how accurate is this tool? Those numbers are great in the abstract. But then you can take the human who’s currently doing all that call routing and evaluate them against the same thing. And you could say, hey, we demoed this AI tool for two weeks, and we had way more false positives. That’s going to be a problem for our business because we have this SLA with these five customers. Maybe you can even go back to the AI company and say, hey, is there a way we can use it only for the customers where all those false positives wouldn’t interfere with the SLA? It’s really about getting a more rigid framework. And honestly, it’s especially useful when working with vendors, because if they’re not willing to give you a confusion matrix, it’s probably not a vendor you want to be working with; every data scientist in that company knows how to pull one. If they’re not willing to give it to you, that’s basically them not participating in due diligence from an AI perspective.
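A minimal sketch of that tally in Python, assuming the demo logged the model’s triage decisions alongside human-verified ground truth; the data and the “escalate”/“resolve” labels are hypothetical:

```python
# Minimal sketch of the tally Jaron describes: compare a demo's AI triage
# decisions against human-verified ground truth. All data is hypothetical;
# "escalate" is treated as the positive class.
from collections import Counter

def confusion_cells(y_true, y_pred, positive="escalate"):
    """Count true/false positives and negatives for one positive class."""
    cells = Counter()
    for truth, pred in zip(y_true, y_pred):
        if pred == positive:
            cells["TP" if truth == positive else "FP"] += 1
        else:
            cells["FN" if truth == positive else "TN"] += 1
    return cells

# What a human later confirmed vs. what the AI decided, per call:
y_true = ["escalate", "resolve", "resolve", "escalate", "resolve", "resolve"]
y_pred = ["escalate", "resolve", "escalate", "resolve", "resolve", "resolve"]

cells = confusion_cells(y_true, y_pred)
accuracy = (cells["TP"] + cells["TN"]) / sum(cells.values())
print(dict(cells))                   # {'TP': 1, 'TN': 3, 'FP': 1, 'FN': 1}
print(f"accuracy: {accuracy:.0%}")   # accuracy: 67%
```

Run the same tally over the human’s historical decisions and you have the side-by-side comparison he’s describing.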
Phil Howard: So let’s go back to the beginning and to that great MIT article on why companies are pulling out of whatever area of AI they adopted that failed. Right. So first of all, do you have any examples you can think of off the top of your head?
Jaron Jones: Yeah, there’s one company we work with in particular where I expect their AI probably won’t be there a whole lot longer. It’s similar to the example I’ve been describing. They’re trying to automate a certain part of their tech support process. Specifically, the AI is doing the routing for them, and I think it attempts to solve some of the tickets.
Phil Howard: Okay. So more of a should-we-open-a-ticket, should-we-close-a-ticket, should-we-escalate-a-ticket type of thing?
Jaron Jones: Yeah. And the problem this company is having specifically is that it’s not working as well as they’d like in an IT space that relies on high SLAs. They rely on things working most of the time. Their customers are saying, humans were taking care of this ninety-five percent of the time, and the customer satisfaction they were getting back from a human was, let’s say, seven out of ten on average. The AI is probably just going to return lower results.
Phil Howard: So now let’s layer in the confusion matrix from the data scientist’s standpoint, and break it down “keep it simple, stupid” style for a top-down C-level business executive trying to make a decision. You come to the table and say, “Hey, let’s take a look at this confusion matrix.” First of all, define what the matrix is. And how are we going to decide go or no-go on this? Pull this AI, it sucks, get rid of it, it’s killing us; or it’s okay, the return on investment is good. I’m assuming the confusion matrix is measuring all of these things.
Jaron Jones: It’s measuring true positives, false positives, true negatives, and false negatives. That’s important context. And about that ninety-five percent failure: the reason a lot of projects fail is that they should not have been started in the first place. So we’ll start there with the confusion matrix. Say ninety percent of our tickets were being handled correctly by a human, and eighty percent, or maybe seventy or sixty or fifty, are being handled correctly by AI. If they had done that R&D and just given a confusion matrix to management and said, “Hey, these are our current numbers. Ninety percent of our calls are handled accurately, ten percent inaccurately. The AI is only doing it, maybe it’s eighty-eight, maybe it’s really close, but in the instances where it’s wrong, it’s having a lot more false positives as opposed to false negatives.” Then they could say, don’t spend time building this. We can’t have the type of error where it tells the customer something worked and it didn’t. That’s really detrimental to our business, even though it’s right a similar amount of the time. It can’t be wrong in this specific way. That’s the difference between a false positive and a false negative: false positives are really destructive for businesses and trust. And you would have never pushed it out in the first place.
And the second part of your question: let’s say the model was live, and let’s say it wasn’t working to expectations because they hadn’t done this prototyping and R&D evaluation beforehand. Really, you come in and do the same thing, because you will be in a lot of business situations where there are people who just don’t like computers. I hate to boil it down to that, but it happens sometimes, especially around a CIO or a technical role. You can provide them something that says whether it’s working or not. There are a lot of instances I’ve been in where somebody says, “This isn’t working,” and I can give them a confusion matrix and say, “Hey, my model is right ninety-four percent of the time. Your technician was only right eighty-nine percent of the time. I get that when the computer breaks, it’s frustrating for you guys. It’s a new type of something breaking, but this is still much more accurate than a person. There are always going to be errors.” Because when something’s new, people who don’t like technology are going to over-index on it breaking. If a human is wrong ten times out of ten thousand and a machine is wrong three times out of ten thousand, you can let somebody focus on those three times and say, “This is crap, pull it down, we’re getting rid of AI,” or you can provide them with the real deliverable and say, “I get it, three isn’t ideal. We all wish our products worked ten thousand times out of ten thousand. But it’s still better than a human, and cheaper than a human.” Numbers don’t lie. People lie.
Phil Howard: That’s a good way of looking at it. And I really like the matrix because it’s not just a yes-no. It drills into the way in which the model didn’t work.
Jaron Jones: I keep coming back to it: false positives are really destructive in tech, where the machine says, “This is done correctly, this is recorded correctly, whatever,” and then it closes the ticket, but the person wasn’t taken care of. AI models have a habit of producing a lot more false positives than humans do, and those are the ones that really deteriorate business trust, because somebody’s like, “I hate this new thing. It closed my ticket, but it’s not taken care of.” Usually, models and humans are pretty similar in the number of false negatives they have, where they think something’s not an issue. Humans mess up, but humans usually don’t say something’s done when it’s not done. They’re definitely better about that. And that’s why I like the matrix. It’s not yes-no; it’s evaluating the specific way something is failing. If you’re a business executive and you understand your business context, you can understand how destructive that type of failure could be, or you can see that maybe it isn’t a big deal. Maybe the human agent was correct ninety percent of the time and the computer agent is correct eighty percent of the time, even though it has four times as many false positives, but this isn’t a workflow that really matters, and it’s way cheaper and faster. We can deal with it, because it isn’t always about proving the model’s better. Your confusion matrix could say your new automation is slightly worse; all the numbers could be slightly worse. But you have other things to contextualize with, like price, SLA, things like that. If you go to your CEO and you can say, “This is eleven percent worse in this way, five percent worse in this way, three percent worse in this way, but it’s four times cheaper,” sometimes that’s acceptable and sometimes it’s not. But it’s good to make the decision with all those metrics in front of you, not to just say vaguely, “This is cheaper and better because it’s AI.”
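A minimal sketch of that weighing-up in Python, assuming you can put a dollar figure on each error type and on handling a ticket; every number below is hypothetical:

```python
# Sketch of the business-context step: the raw matrices say the human is
# "better", but weighting each error type by what it costs the business
# can still change the go/no-go call. All numbers are hypothetical.

human = {"TP": 880, "FP": 20, "TN": 9000, "FN": 100}   # 120 errors / 10,000
model = {"TP": 820, "FP": 80, "TN": 8950, "FN": 150}   # 230 errors, worse on paper

ERROR_COST = {"FP": 50.0, "FN": 10.0}   # a false "it's fixed" hurts 5x more
HANDLING_COST = {"human": 4.00, "model": 1.00}         # cost per ticket

def total_cost(cells, handler):
    tickets = sum(cells.values())
    errors = cells["FP"] * ERROR_COST["FP"] + cells["FN"] * ERROR_COST["FN"]
    return errors + tickets * HANDLING_COST[handler]

print(f"human: ${total_cost(human, 'human'):,.0f}")    # human: $42,000
print(f"model: ${total_cost(model, 'model'):,.0f}")    # model: $15,500
```

The point isn’t these particular numbers; it’s that “slightly worse but four times cheaper” only becomes a defensible decision once the error types are priced.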
Phil Howard: I really like this because, first of all, it gives a very technical approach to decision-making, but it also provides a way to translate it into business language a C-suite can make a decision on. So you’re saying: first, use the confusion matrix to decide, A, is this better than a human or not? What are the numbers, basically? Is it even plausible? Is it even worth doing? If it is worth doing, okay, how much does it cost? How many hours is it going to take to deploy? And then there’s implementation. Then there’s… I like this idea, this is what I wrote down: “how to never start a project that shouldn’t have been started, ever again in your life.” That applies to so many other things in life, and there are so many variables to it. You have, number one, this confusion matrix thing. But how would you even apply that to, say, vendors that are knocking on your door to begin with?
Jaron Jones: I just worked with a vendor who’s providing AI technology for our business right now, and I’m not going to get too far into the specifics of what they’re offering. But the first thing I asked them for was a confusion matrix, and they gave me one.
Phil Howard: Did he say, “Hold on, what the heck is that? Let me go back and get you one”? Or did he say, “Oh yeah, it’s right here”?
Jaron Jones: There was a technical person on the call. Some sales engineers are really technical; some aren’t. I think sales engineer is the best job in the world because you don’t ever have to actually be responsible for any one project. You get to have a lot of projects, but at the end of the day, you’re not ultimately the guy implementing them. So it’s really cool. You get to talk technical all day and do all the fun work, and you don’t have to do the selling because the sales guy does that part. It’s actually a pretty sweet job. But yeah, I think they gave us sixty percent as a number for the true positives. So the model’s wrong more or less thirty to forty percent of the time, depending on how you want to look at the true negatives. But for this business use case, what it was going to be automating and taking off the table for us, that was acceptable. And I think that helped us as well, because it established a baseline, a metric, and a target upfront: this product is not going to work all the time, but it’s going to work more than half the time, and it’s going to remove a lot of…
Phil Howard: Just out of curiosity, the things that it was wrong on… are these weird outlier-type things?
Jaron Jones: It’s using vision recognition, and actually, I think sixty percent is an acceptable benchmark in that space right now. I wouldn’t say they’re outliers. Vision recognition is different: it’s trying to categorize things, and certain things it’s very good at categorizing, but certain things it’s consistently not good at. So it’s not outliers so much as things it consistently fails to categorize. A really good example: if you had a vision model that was supposed to categorize cats, and you had a bunch of dogs run through, it would go dog, dog, dog, dog, dog. But if somebody had a breed of dog that was black and nine or ten pounds and it ran across, it would show up as cat, pretty consistently. And I see some of that here. It’s failing a lot of the same kinds of problems, and I know why it’s failing them: the input looks like something it isn’t, so it falls in the wrong bucket.
Phil Howard: Can it learn? It’s supposed to be learning. Can it learn? Can we implement something like, “Hey, by the way, that was a cat”?
Jaron Jones: I think it has reinforcement learning. With this vendor, there’s an option in the product where you can flag a prediction as accurate or inaccurate. My hope, and how I would design the product, is that the flag goes back to model training. When you want to make a model, you have to categorize or classify your training data, and I would hope that our flagging it would go back and reclassify that training data. So let’s say a yield sign showed up and the model said it was a stop sign. I would hope it would go back and fix that label in the training data, but I don’t know… it might just be there to make us happy. I don’t know their internals.
Phil Howard: So the sales engineer told you, literally right to your face, it’s wrong thirty to forty percent of the time.
Jaron Jones: Yeah. Honestly, for vision recognition right now, there are definitely some people who would want to come in and flex and give higher numbers. But if people are going to be realistic about what they’re seeing in businesses right now, I think that’s the number for situations where you have a lot going on and you don’t have clear colors or barriers or dividers between your target and the stuff behind or around it. From what I’ve seen on Kaggle and in other data science competitions, what we’re getting is an acceptable number. And I was happy they were honest in giving it to me; that also shows me it’s not BS. If I wanted to test whether it’s BS, too, I could just get someone in tier one or tier two and say, “Hey, we’re going to demo this for two weeks.” I could pull down two hundred videos and have him manually build the matrix for me. And I do that a lot when I just don’t trust a vendor: out of these two hundred, here’s how it categorized them; can you tell me which were true positives, which were true negatives, and which were obviously wrong? You can just have a human do that and test these vendors too. It’s great that I believe this vendor gave us a good number. But a confusion matrix is four numbers. I could ask ChatGPT what good numbers look like and give them to you right now.
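A minimal sketch of that audit in Python, assuming a technician has hand-checked a sample of the vendor’s predictions; the sample size and counts are hypothetical:

```python
# "Trust but verify" sketch: a technician hand-labels a sample of the
# vendor's predictions, and we compare the observed hit rate against
# the vendor's claimed number. All figures are hypothetical.

claimed = 0.60                         # what the sales engineer quoted

# audit[i] is True where the technician confirmed the model's call.
audit = [True] * 112 + [False] * 88    # 200 hand-checked predictions

observed = sum(audit) / len(audit)
print(f"claimed {claimed:.0%}, observed {observed:.0%} "
      f"on {len(audit)} samples")      # claimed 60%, observed 56% ...
# A small gap on 200 samples is expected noise; a large one is a red
# flag to raise with the vendor before rollout.
```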
Phil Howard: So trust but verify. Yes. Number one, ask your AI vendor for a confusion matrix if it’s a specific use case. What do you say about end users using LLMs in general, and all of that?
Jaron Jones: What do I say to it?
Phil Howard: Yeah. I mean, what do you think about leaking data, company data, company-sensitive data, all that type of stuff?
Jaron Jones: It’s actually an interesting topic, because there are a few approaches to it right now. One is you can just block a lot of it. I mean, they have known IPs; they’re vetted companies. You can go in from a security perspective and say nobody in your org is going to use it. I have friends who work in finance and government, and that’s the case for them right now: just don’t use any LLM at all. Just don’t. Or, some of those companies, OpenAI and Gemini and so on, are developing contracts with these enterprises where they have agreements on what they do and don’t do.
The other end of that is the browser extensions that are coming out. It’s a tricky space, because I think they do what they do well; I’ve seen some of the source code for some of the startups, and I don’t think it’s a terribly complicated problem. But it’s the same thing: it’s text recognition, so you’re going to have false negatives and false positives. There’s an error rate there. Even if the extension works ninety-nine percent of the time, some small amount of data is going to get through. And some of these companies are new enough that, if you’re an enterprise that knows how to run due diligence questionnaires with your vendors, some of these browser extension companies aren’t at the point where they can work with you on that type of stuff. They’re definitely more in startup mode. I think that’s going to change rapidly, to be honest. There’s a need, and the fact that you’re asking me about it shows there’s a need. But that’s where most of the space is right now: either ban it, work with the company and get an internal model that’s safe, or try these browser extensions and find a relatively vetted company if you can.
Phil Howard: Or there’s the extreme example of “you’re fired if you don’t use it.”
Jaron Jones: Well, I’m talking about using it securely, as opposed to driving adoption. I think it’s like a Ferrari. Let’s say everyone in your company is driving a Corolla; they have relatively the same tools, and everyone can drive a Corolla. It’s easy to shift, you can get in and turn it on; if you rent a Corolla, everyone can probably figure out how to drive it. Some people, you give them the Ferrari and they know how to drive. They’re really, really good, like developers or data analysts, and they’re using it to offload some of their work and automate. They’re using it as a second source of truth. I like using it for pull requests. Sometimes I just copy and paste the thing, review it myself for a minute or two, and then compare notes. I really like it for that.
Phil Howard: Walk me through that, because that sounds like… how do you do that? Control-A, Control-C, Control-V? And again, you’re talking to a real simpleton from a data science and math perspective. You’re using things in a way the normal human doesn’t. Everyone else is probably laughing at me right now, like, “Phil, why would you ask that?” But what are some of the ways you use AI? How do you use it? Are there any tricks of the trade?
Jaron Jones: Problems that don’t require context, it’s really good at. I would say that’s the biggest one for me. To go back to the Ferrari example, I’d like to finish that because I really like it: a lot of the really strong developers just use it to offload work, and they understand the context windows. So to answer your question, it’s really good for problems that don’t require as much context. If I had a data analysis problem, or a script that was confined to one Python file, or maybe the libraries and dependencies are all in one pretty tight repo, it’s pretty easy to get it in there. It does a good job of evaluating everything; everything’s within bounds. What it struggles with: I do a decent amount of work in the database with ETL pipelines and things like that, and that can be harder for it, because it requires so much context to understand all of your company’s ETL pipelines, what tables have what indexes, what constraints. It’s definitely harder. So: problems with fewer dependencies. I keep saying context, but quite literally what I mean is dependencies. And the developers I see who struggle with it are the ones who don’t understand everything. There’s a lot of copy out of ChatGPT and paste it in, and they’re like someone who can’t drive a Ferrari. They’re better off in a Corolla; they’re probably going to crash. I see a lot of developers who aren’t as strong, and I think the LLMs are making them worse developers. I fear for a lot of junior developers who lean too much on the LLM, because they’re just in a Ferrari, crashing over and over and over again every day, and some of them will learn to drive the Ferrari and some of them won’t. It’s definitely a deep-end-of-the-pool approach.
Phil Howard: That’s the glass-half-full, glass-half-empty question. Is this going to be the dumbing down of society?
Jaron Jones: Some people will use it as a second source of truth. Some people use it to check their work. I work with a guy who has been writing a whole lot of T-SQL for probably two years. He learned so much from ChatGPT, and I know there are not a lot of people in his department who can mentor him. It works specifically because he’s a really curious guy: when he doesn’t know why something’s doing what it does, or how to improve something, he’s a big what, where, when, why guy. Those people, I think, can benefit a lot from LLMs. The “just make it work” people, I fear for them.
Phil Howard: Curiosity is one of our values on the podcast. If I were to ask every single person that’s ever been on the show, which is like three hundred and ninety now, “If you were interviewing someone to bring onto your team, what is one of the most important character traits or values they should have?”, they’d say curiosity is big.
Jaron Jones: Do you want my number one? I’ve had a lot of time to think about this, prepping students at the university where I teach, and I like to be involved in interviews here because I think staffing is really important for culture building. It’s asking people questions they don’t know. Specifically, I like to look at someone’s resume, find a topic they’re relatively knowledgeable on, and give them something a little over their head. Some people have a tendency to reach and just talk a lot, this and this and this, and they’re like, “Oh, it’s on the tip of my tongue.” It’s on the tip of your tongue for a reason. I know you’re close to knowing this, but I know you don’t know it. And some people are really good at saying, “I don’t know.” “Hey, I know this part of the problem, but I don’t know anything beyond that.” Very literal, and much better at contextualizing where they are and where they aren’t.
Phil Howard: You’re saying that’s a demonstration of integrity and character?
Jaron Jones: A demonstration of a lot of things. A lot of it’s a demonstration of how quickly they’re going to be able to handle learning a lot of new things in a workplace.
Phil Howard: What’s your follow-up question? Like, “So what would you do to learn it?” What’s your follow-up?
Jaron Jones: I’m definitely a go-with-the-flow person. People usually give you enough to come up with a good follow-up question. If somebody was rambling for a while, I’m probably not going to ask them a follow-up at all; I just don’t think those are useful conversations. A lot of the time, if somebody said, “I know this part of the problem, but I don’t know this part,” I start drilling into where that person is at. I like to ask people, “If I paid you for two weeks to learn more about anything, what would you learn about?” Then ask, “Why would you learn about that?” and try to find where their head’s at.
And I think this ties into AI and why there’s tough decision-making in AI right now. Every ten years that goes by, there are more layers of abstraction in technology, and AI is a new one. So you have AI, programming, data, and then executive. That’s why it’s really hard right now: it used to just be programming, data, executive, and there’s this new layer on top of everything. It’s the same thing with a developer or an analyst: trying to figure out, are these big-picture people who like to focus more on solutions architecture? Are these nitty-gritty people who want to get in and grind binary trees or do Rust performance optimizations? Knowing that can definitely help make sure they end up being hired to solve the right problems in your company. Because if you get somebody who wants to do Rust optimization, for example, and you hire them and say, “Hey, can you clean up our Azure environment and our DevOps and this and this?”, it’s probably not a good fit. So: figure out where people are at, what problems they like to solve, things like that.
Phil Howard: It blows me away how far we’ve come in technology, and, just like you said, how many categories and subcategories of jobs there are now, things we would even need to hire someone for.
Jaron Jones: Yeah, I always really like the term “layers of abstraction,” because it helps contextualize that it’s okay not to know this thing; it’s three layers from what your job is right now. If you’re an executive, how far is AI actually from your job? There’s the data part, there’s the programming part, there’s the AI part. There’s a lot going on there, and more than there was ten or twenty years ago, because AI usually involves an app. So now you’re expected to understand the data, the AI, and the app. Ten years ago, you had to understand the data, your business problem, and the app. There’s just more to worry about. And if you don’t understand all those things at some relative level, you’re probably just going to make more mistakes. Or you can find people you trust who do understand them and get help. That’s the best advice I would give anyone: if you’re not that type of person, it’s not going to get less complex. Separation of concerns. Find somebody who does like working on those problems, because AI is just so rooted in math. I got into data science because I liked math. There are things I don’t like working with, and I’ll do them if I’m asked to.
Phil Howard: What do you hate working with? I gotta know.
Jaron Jones: What do I hate working on? Creative writing. I’m a sicko: as long as something is challenging, I like working on it. I really like puzzles. I really like intricate, complicated board games and stuff like that.
Phil Howard: Isn’t that like non-human? Isn’t that the whole purpose of AI anyways?
Jaron Jones: It is. It is.
Phil Howard: Like that one company that had two thousand people in India doing that one automated… call it AI.
Jaron Jones: Yeah. That was because it was cheaper. It made more sense. That was a business case.
Phil Howard: There are so many things I want to ask you now. First of all, asking people questions they don’t know is awesome. How would you know they didn’t know something, by the way?
Jaron Jones: Let’s say I get a resume, and on it they say they’re relatively good at JavaScript, C#, and SQL. I’ll go, “Oh, I actually know quite a bit of SQL,” and I can tell by the things they listed roughly where they’re at. Then I can ask my feeler questions: “Hey, can you explain the different types of joins to me?” Based on how they do on that, I know what the next question needs to be. And once I start to feel a little struggle, I’ll try to go one step above and see what happens.
Phil Howard: I think that’s good too. This is going to sound weird, but I think it’s good to see people at a point where they’re a little stressed or challenged, because you’ve got to interview someone you’re going to work with on good days and bad days. If you have a coworker facing a problem they can’t solve, some people are good to work with and some people aren’t, and if you can test that a little bit in the interview, that’s always a nice thing. I know it sounds funny, but I interviewed this guy years ago when I was a store manager at Starbucks. This is like twenty-five years ago, and we were a very, very busy store, sometimes easily seven hundred people in and out the door a day, line out the door, crazy busy. It used to get really, really stressful, and if the deployment of people on the cash register and people making coffee was off at all, everything would fall apart and people would get really stressed out. So we did behavioral interviewing, and I asked this guy, who had just finished his second tour in the Marines, one of those questions like, “Please tell me a story about a time you were under pressure and why it was so… whatever. Can you describe everything that was going on and how you dealt with it?” And he was like, “Yeah, I was on the front line in the invasion of Baghdad, and I was driving the Humvee. My tire blew out, and I had to change it with night vision goggles on, with missiles and bullets, you know…” Dude, I was like, “You can handle coffee.” I don’t think I ever saw him not in this really weird, calm mode, even in the most stressful situations of a stupid coffee shop.
I want to ask you: what is the end game for people in IT? I’m just curious. What’s life all about? What’s the end game for someone like you? I ask everybody, and most people don’t have an answer. Is it, “Oh, I’m just going to cash out a 401(k)”? Do you have any big dreams?
Jaron Jones: I want to keep solving hard problems. My wife has probably six or seven years of school left, and I think starting a family while she’s doing seventy hours a week in medical school would be tough. So I just want to focus on solving hard problems and working with good people for that period. Then, as we get to the point where we can start a family, we’ll both have to evaluate. And it’s important to say that one of the best lessons I’ve learned is: don’t make decisions for your future self. It’s easy for me to say now that I know where I’m going to be in seven years, that I don’t want to start a family, that maybe I’ll do this, this, and this for work. I don’t know. I know what I want to do for the next five years, and probably until she’s out of school, but I’m not going to make decisions for future me right now.
Phil Howard: Yeah, no one ever knows where they’re going to end up. Who are some of the smartest minds out there right now? Just some of the smartest ones? Because I have friends, PhD data scientist guys, who went and sat in on interviews at Facebook years ago, and they put them in a room for like twenty-four hours: “All right, here’s a problem. How do you solve it?” Then they’d come in, look at the solution, and say, “Okay, not good enough. You don’t get the job.” Someone else does. These are some pretty smart people. Do you have a finger on that?
Jaron Jones: Yeah, occupationally, because I think that’s the question. I keep going back to solving problems. A really eye-opening example for me: I have a long-time friend who lives in New York right now, and he has a partner who works at a trading desk for a pretty prominent bank. They didn’t study business or finance; they studied physics at Harvard. You kind of touched on that. I went for a run, I run or bike every day, and that gives me time to think about things, and I asked myself, what does that really mean? Why did this company, which is probably much better at assessing talent than you or I or probably anyone listening to this, identify and go after people who don’t have an occupational match to the job? Someone who studies physics in undergrad at a good university is probably better at solving hard problems than most of us. And I think that’s what it comes down to: can you solve really hard problems, and do you have some tools to solve those problems in the modern age? A lot of that is math. Data science and physics have a lot of similarities there: if you have a lot of math, you can get in. You can do ETL work, programming work, data science, machine learning, AI, whatever, as long as you have that background in math. If you don’t have that background in math, you’re hoping you memorized keywords and understand some technologies. But under the hood, ML and AI are statistical in nature. Data science as a discipline is very, very statistical. If you don’t have that backbone, it’s like AI is a car and all that math is the engine. Maybe you can learn a lot about the tires and the paint job and all this other stuff, but to really understand the engine requires a certain level of math and statistics. I think the smartest people are the people working on those problems. If I went back to college right now and my goal was to future-proof myself, I would study math or physics.
Phil Howard: My dad was a math major who went to med school.
Jaron Jones: I have actually heard that’s a very good idea, and I’m seeing it now with my wife, because you have to relearn biochem and micro and all that other stuff in med school anyway. You have to relearn everything, so why not learn something cool and interesting in undergrad and give yourself more context?
Phil Howard: Yeah, he did. He was a math major, and he was going to do some kind of engineering. Then he was like, “I can’t do that. I’m just not excited about it.” He actually got into medical school in his third year at Bowdoin College, and he didn’t even graduate; they took him into medical school before he finished, so he went to med school without a college degree. So what do the rest of us laymen do, the ones who are not math geniuses? Who are the second smartest people in the world? What should we study?
Jaron Jones: I would say get the baseline so you have the tools for dialogue. Say you’re a sales engineer and you want to work on software. That would teach you that you need to learn about APIs and this, this, this, and this to talk to those people, memorize a thousand acronyms, and know what an IP address is. Just don’t touch the router. Data science is the same thing. If you’re working with a vendor, a startup, or your own internal team, maybe a CEO is listening to this and wants to be able to suss out whether his CIO is bullshitting him, and he wants better language to do that. We talked about the confusion matrix; that’s a big one. Confidence intervals are another. Given a certain subset of data with a certain number of negatives and positives, with what level of confidence can you say whether you believe a result or not? Everything exists on a spectrum from zero to one hundred. What does that mean when you’re working in the research world? Someone can tell you, “This is ninety-nine, this is ninety-five, this is ninety.” Being able to contextualize that is incredibly important, because you’re never one hundred percent sure, even in vaccine research or things like that.
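A minimal sketch of the confidence-interval idea in Python, using the normal approximation for a proportion; the audit counts are hypothetical:

```python
# Rough sketch of the confidence-interval idea: given n audited
# predictions and k correct ones, how sure can you be about the true
# accuracy? Uses the normal approximation; counts are hypothetical.
import math

def proportion_ci(k, n, z=1.96):   # z = 1.96 ~ 95% confidence
    p = k / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

low, high = proportion_ci(112, 200)
print(f"observed 56%, 95% CI roughly {low:.0%} to {high:.0%}")
# -> observed 56%, 95% CI roughly 49% to 63%. With only 200 samples the
# interval is wide; auditing more samples is what narrows it.
```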
Phil Howard: You mentioned the vaccine numbers. Even those ninety-nine percent numbers can be played with. It’s like an SLA: you have a 99.9999% SLA. What does that really mean?
Jaron Jones: Yeah, it means hours of downtime averaged over thousands of customers, over a year, over a massive geographical area.
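The arithmetic behind the skepticism, sketched in Python: how little downtime each availability level allows per year in aggregate, which says nothing about how that downtime is distributed across customers:

```python
# What an availability SLA allows per year, in aggregate. Six nines is
# about 31 seconds of downtime a year; how that downtime is spread
# across customers is a separate question the headline number hides.
SECONDS_PER_YEAR = 365 * 24 * 3600

for sla in (0.999, 0.9999, 0.99999, 0.999999):
    allowed = (1 - sla) * SECONDS_PER_YEAR
    print(f"{sla:.6f} availability -> {allowed / 60:7.1f} min/year")
```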
Phil Howard: And Microsoft can put that in a report, but the aggregate doesn’t reflect your SLA. You know, who cares? It doesn’t mean anything to you.
Jaron Jones: Other stuff is controlled. I actually have some publications in that space, and my wife does too, and generally, stuff there is controlled and vetted within reason. I’m not going to get too far into it, but there are some fields of science and research where you can just put a confidence interval on a paper and nobody’s likely to try to recreate the study or experiment. Generally, when you’re in a space where something’s going into human beings, somebody is recreating it and validating it. So there’s some assurance there.
Phil Howard: Last question. Last time, you had some good suggestions, like, “Hey, go take a Udemy course or something.” If you’re an IT director, a mid-market IT director, whatever, Fortune 5000, you might not have a certain level of knowledge to be able to break things down. What would your suggestions be?
Jaron Jones: My biggest suggestion, I actually just came up with it from our last ten minutes of discussion, but I feel very good about it.
Phil Howard: How good do you feel about it? Are you like ninety-five percent good?
Jaron Jones: About ninety-nine to one hundred. I feel very good about it. Okay. Find courses online that are just “Introduction to Math for Data Science” or “Introduction to Data Science.” You don’t have to go deep. Just go maybe two to five hours in. Learn enough jargon; learn the baseline. I did a Coursera course years ago when I was getting into it, and the first unit covered all the key stuff, and the rest of the units went deep. You don’t need to go deep if you’re not a data scientist, but you want to learn the key stuff. And the piece of advice I want to give very explicitly: find stuff online for data science, not AI. A lot of the AI stuff out there is really convoluted. It’s really product-focused, on which version of ChatGPT is better than the other one.
Phil Howard: Minutia stuff.
Jaron Jones: Yeah. Find something that says “data science” and not “AI,” because it’s probably going to focus on the bones, the language, and the communication more. It’s less centered in hype and more centered on the discipline all of this is built on. To be frank.
Phil Howard: It sounds like, if you weren’t too busy, you could write that course yourself.
Jaron Jones: I think there’s a lot of really good stuff out there, to be honest. Andrew Ng, I did his courses years ago; I think he was mostly on Coursera at the time. His stuff was great back then. I was making a decision around 2019 or 2020, after we moved, whether I wanted to do data science or quantum computing. His courses, seeing some of the problems he could solve, and getting to actually work with the math, having the calculus and stuff like that deeper in the course, which, like I said, you don’t have to get into, got me really excited. That’s one of the reasons I got into data science. I know he’s still making stuff; I see him on my LinkedIn sometimes.
Phil Howard: I’m giving you the last word. Final word? It’s been a pleasure having you on the show. What’s the most useful thing you’d say to all the IT leaders out there right now?
Jaron Jones: Focus on data. I think Models as a Service will become more popular, where you can take your data and plug it into a model. There will be new AI technologies this year, next year, five years from now, ten years from now. Getting something that can deal with all of your company’s data, whether it’s staff data, changing policies and procedures, stuff like that, is just super important, because there aren’t a lot of services that can clean up your stuff and get it ready for AI. And AI, machine learning, and deep learning, all these tools, don’t work well with dirty data. If you have good data, you could probably hire a consultant for one hundred hours and have them build a great model for you, because they’ll say, “This is good. I don’t have to do any cleaning. This is ready for classification.” It’s almost like data is lumber. Today we’re building ships, and tomorrow we might be building houses, and ten years from now we might be building apartment buildings, but we’re always going to need lumber. So focus on your lumber.
Phil Howard: Jaron, a real pleasure, man. Pleasure having you on the show. Thank you again. Have a great day.
Jaron Jones: Yeah. You too.