Seasons of Change: Future-Proof Your Business with Hyperscience
Is your business ready for what comes next? Traditional automation tools have failed to deliver, but there’s a better way to drive measurable transformation. In this webinar, Chip VonBurg, Field CTO at Hyperscience, and Brian Weiss share:
- Insights into the trends and strategies shaping automation’s future.
- Examples of how industry leaders like Charles Schwab are overcoming today’s challenges.
- Actionable recommendations to help you align your team, accelerate outcomes, and reduce costs.
Exclusive Offer
Meet 1:1 with Chip VonBurg, Field CTO at Hyperscience
Discuss your automation strategy and walk away with actionable insights tailored to your organization.
Reserve your session now. Space is limited.
Brian Weiss: Good morning everybody, or afternoon, depending on where you are in the world. Thank you for joining us today. The title of the webinar that we’re gonna spend the next hour on is “Seasons of Change: How to Future Proof Your Business with Hyperscience.”
I am joined by my colleague, Mr. Chip VonBurg, and Chip VonBurg is the one in the light background. I chose the dark background. I got the dark side of the force in me this morning. Chip, watch out. It’s gonna be fun. But Chip has spent, what, I don’t know, somewhere between 12 and 25 years (he’s gonna correct me here) in the IDP and automation industry, and he has held multiple roles across companies. We’re gonna talk about not just the industry and sort of where it’s going, but also the technology that’s driving the edge of that. Chip, tell us, tell the audience a little bit about yourself.
Chip VonBurg: Well, thanks for the intro, Brian. My yin and yang brother, as he said, we’ve got opposite sides here today. So, I’ve been in this industry for a long time. I’ve been doing IDP and automation for about 25 years now. During that time, I’ve worked for several large vendors and a couple of partners out there. And, you know, really done a little bit of everything. I’ve been in just about every industry out there. I’ve sat on site with many, many, many customers doing all kinds of things. And, you know, in my last role, I was the CCO for one of the large vendors out there in the IDP space. I’ve been around the block for a little bit. I’m fairly new here at Hyperscience. I’ve been here about three months. But I do wanna throw a quick shout out to the Hyperscience folks for such a warm welcome and to the Hyperscience customers that I’ve met with so far. It’s been a great time so far and I’m looking forward to doing more of it. So let’s dive in.
Brian Weiss: Yeah, the great thing about working with Chip is he probably has forgotten more about IDP and automation than most people will ever learn in their lives. And maybe you’re happy about that, Chip. Like, you gotta forget some things. Look today…
Chip VonBurg: It’s all ingrained in there, Brian.
Brian Weiss: It’s all down there somewhere, right? Um, look, today we’re gonna sort of back and forth around key topics around the industry, technology, where it’s going, hopefully with the idea of leaving you with some key points to think about as you sort of work down your automation journey or even think about how to bring AI into that process. But we want to wake everybody up with a poll question just to get everybody awake, figure out where people are going.
Quick question for everybody: What is the biggest challenge your team faces with current automation solutions? And, um, I guess Chip, you can hum the music. And I’ll ask the question, is it Manual Workload? Is it Accuracy? Is it Limited Integration, Implementation and Maintenance Costs, or Compliance or Regulatory Concerns? Pick some, Chip, while people mull on this, ’cause I know we’re right out the gate and we’re asking them to participate, and they’re a little freaked out about it. What’s your favorite?
Chip VonBurg: I would lean towards Inconsistent Accuracy. And I think it’s because as people chase automation, sometimes they zero in on accuracy instead of automation, and, you know, we’re gonna talk about this a little bit later. But it’s a KPI, it’s a number that I think we see a lot of people using out there. Um, how about yourself? What are your thoughts here?
Brian Weiss: I gotta go with Compliance and Regulatory, because I can’t tell you the number of folks out there who really want to dive into AI, but are rightfully concerned about: Where’s my data going? How’s it being used? Am I a good steward to my customer? Am I compliant with everything I need to be? Uh, we got a few answers, Chip. Let’s see where people landed. Ah, okay. So Accuracy, I’m not the number one choice. Um, but definitely up there. Alright. High Manual Workload is the winner, though, by a few percentage points. Alright.
So, stay on your toes, everybody out there, we’re gonna ask you these questions along the way just to keep everybody moving along. We appreciate the feedback too, uh, as we do this. As we do this, look, I want just sort of to start it off… when I say I work in AI and we work in AI, people say, “Aren’t you taking people’s jobs?” and things like that. But really what we are focused on here is to bring AI to allow humans to do the human work, right? Not the mundane everyday. Like that’s what machines should do for us in automation. So it really is a noble cause. The other part of it is to make the work itself more human. If we look at a mortgage application, it matters to somebody to get it processed faster and more accurately. Or say, for example, what we do at the VA, where we took a veteran’s health claim down from a waiting time of three months prior to Hyperscience to under a week. That’s a big difference. There’s a real human story behind every document. Uh, and so I just wanted to kind of put that out there as we start out, Chip.
And let’s get into it. The key question I have for you right outta the gate: Hasn’t technology solved this already? Aren’t we kind of done? I mean, HAL 9000 here—I know that looks like to everybody like a Ring doorbell—but like, the machines are supposed to be able to do everything now, right? Isn’t that the case, Chip?
Chip VonBurg: It’s… I think it’s a big misconception. You know, so compute power has gotten better and faster. Uh, you know, computers have gotten smaller as they’ve gotten faster, right? SSDs, cloud compute. And then of course, the big one, ChatGPT a couple of years ago, right? So surely all of the technology that you could ever need to solve these problems is there, right? You know, I think that’s the big misconception that people have. But the reality is it’s not, right? It’s not done.
And you and I continue to visit customers all the time where we still see people doing things manually; unfortunately, that’s just the reality of it, right? The problem is, frankly, these are hard problems to solve, right? And so because they’re hard problems to solve, people sort of kick the can right down the road on these, and they do the things that they can and they go, “Well, this one’s a little bit too hard for us to do. Let’s just continue to do something else to try to automate it as well as we can, or frankly, wait until we can automate it.”
Chip VonBurg: And so when you look at things like the document on the left on the screen, these are real-world problems that people have that are hard to solve, right? Handwriting, you know, nasty images, right? We all see the infamous fax line going through that document, right? Um, these are problems that make it tough to solve. ML solutions came out and held a lot of promise to solve these, and we’ll talk about that in a bit. Um, but frankly, as those solutions first came out, they took a lot of resources both on the systems, but also on the people to actually roll those things out. And it’s because of those sort of problems that you can still see things like a recent survey from McKinsey showed that companies are still spending $60 billion on manual data entry, right? So it’s definitely still out there and still happening.
Brian Weiss: Yeah. I mean, I’m honestly shocked—or maybe not so shocked—by the other stat that the BPO industry, which is “your mess for less,” is about $550 billion. And that’s a lot. That’s a testament to what the machines have failed at. It’s our inability to actually solve this problem. And I don’t know, Chip, I have yet to meet anybody that says, “My strategy going forward is to spend more with my BPO,” right? I mean, so the testament to our failure to get this right with technology is that spend, and I think it’s the bane of a lot of CIOs and CTOs existence, right? Is that having to make that tough choice?
Chip VonBurg: Agreed. Yeah. Very much agreed. So let’s look at a brief history of where we are and how we got here, right?
Chip VonBurg: So just a brief history of IDP. Kind of started on the left-hand side. These are the types of problems people are trying to solve. And then across the bottom, you see the different technology sets that people deployed. Um, I lived through this. I remember lots of little stories kind of through the years of challenges that were, you know, people were facing. And as I look at the attendee list, I know some of you lived through a few of these with me. So, uh, I appreciate the shared perspective on the history here.
But, uh, so if we look at the basic OCR, right? OCR is as old as the fifties. It’s been around for a while, but it’s really, it was like the late… or I should say it was like the nineties where people actually started to deploy it and use it for real-world kind of stuff. That led then into Forms Capture, uh, structured forms, right? And so this is where people started to go after the typical government-looking forms, for lack of a better description. And they started to use things like zonal recognition. And, you know, that was great. It was a big step forward because now you could automate the ingestion of some of the data.
But what about the semi-structured stuff, right? That was kind of that next hurdle that people wanted to go after. And so this is where you started to see things like Enterprise Capture. Some of us referred to it as IDR, right? And this was now starting to come out with rule-based systems instead of visual-based systems to go after those more complex documents. And it was a step forward, but it was ultimately still a rule-based system that then led us to where we sort of are here today, which is IDP. And this is now where folks have ingested machine learning and AI and models into it to really, again, take us to that next level of automation.
A couple of things to be careful of is, as folks went from that IDR enterprise capture world into IDP, you see a lot of vendors that frankly just bolted on the machine learning. So ultimately it was still a rule-based system with some machine learning in it. And in fact, some of them would actually use machine learning to build the rule. So they were still using a rule-based system. You just used machine learning to try to come up with a better rule set faster than a person could. And we’ll talk about all of this here as we go forward. And then of course, the next, right? What’s coming next? I think we can all guess Gen AI is a part of that, and we’ll talk about that a little bit later as we get into the presentation.
But let’s start on Rules versus Models, right? So these older systems that you see out there, and frankly you do still see them out there, the rule-based, right? And so rules could be a couple of things. It’s ultimately templates. And the template could mean you’re visually drawing boxes around a field. Okay? Here’s first name, here’s last name, right? Things like that to set up the template. Um, but to go after those more complex documents, the template could actually be code-based. And you see this in a lot of the systems, is that it is code-based, and sometimes it’s actually a sort of a cross between the two of them. But no matter if you’re drawing the template or you’re writing the code around the template, it’s still a template.
And the challenge with templates is they’re fragile, right? As the real world hits and the documents drift, and there’s changes in the documents that you couldn’t foresee, they don’t work, right? And so the way that folks like myself that were actually deploying these systems would do it, is we’d come up with clever ways to go about getting things done, like the 80/20 rule, right? And everybody’s familiar with the 80/20 rule. We’re gonna take a certain number of your top vendors—we’re gonna take the top five or top 10 of your vendors—and we’re gonna work on those, and we’re gonna get those built out. So you see some level of automation. And then as soon as we’re done with those, we’re gonna go to the next top five or the next top 10, whatever the magic number happens to be. And then we’re gonna kind of continue to do that until we get everything done.
Well, a couple of problems you have with that: Number one, you never get done, because the number of vendors or the number of variations is somewhat unending, and it’s just gonna continue to go. But the other problem that you have is, as you finish that top five or that top 10, and you’re starting to work on the next top five or top 10, real world hits, variations happen, and you have documents that aren’t being processed as well as they should. And so you sort of have to go back and you have to do these maintenance cycles periodically to check and test and tune up all of the templates that you had previously built, right? And that’s a never-ending, growing number of templates. And so it’s a little bit like painting the Golden Gate Bridge, right? As you get to one end, you just gotta go back to the other and start painting again.
And so the real problem with all of this is it’s not scalable. And so unfortunately, folks are really left with making a decision of: “Do I want scalability or do I want accuracy?” Because I can build a whole bunch of templates really, really fast that gives you, yeah, so-so accuracy. Or I could build a smaller number of templates that gives you really, really good accuracy. But frankly, it’s difficult to do both of those things. And so this was the reality, and frankly, this is the problem that we still see out there with folks that are still using these legacy rule-based platforms.
Brian Weiss: Yeah. Um, yeah, Chip, I mean, if I look back at the history of the industry, the fact that there’s so much still out there that’s built on a template-based system and a rules-based system, and you’re just constantly trying to chase the variability… it’s just not gonna happen, right? And sure, you can go after the really simple stuff that’s always locked down, but then it isn’t. And so we encounter so many customers with these older capture-based tools and their ambitions for automation accuracy are just horrible. Like, they think 11, 12% is pretty good, right? Because, you know, that’s about the best they can do.
Um, so a quick example of a customer of ours at Charles Schwab who went through this journey with us at Hyperscience, and specifically for their account openings. Now, these are critical business processes for them, right? When you’re migrating a customer over, there’s any number of different forms or applications you can fill out. Clearly, if somebody hands you a Schwab form from 10 years ago, you can’t tell ’em not to use that one. So you have to have all the variability. And they were using an old tool, I think it was a Kofax or a Datacap or something that was very much oriented around rules and OCR, and it was so inaccurate that effectively they just went and used it for classification and then kicked out. At the end of the day, most of the work was going out to people. Now, what does that do? It’s expensive. And two, it takes a long time.
So now, one of the concerns folks have is, “Well, I’ve got these thousands of templates I’ve invested in, shouldn’t I just continue to invest in that? Because what’s my switching cost?” And they were concerned about that. So what we did with them was we rolled out a process where they chunked out by business unit, not necessarily by document. And we looked at the overall processes, including the people involved with the BPO, of what it takes. And we implemented a model-based approach there and chunked through that, including taking that work that BPOs do and putting them at a “human in the loop” with the model inside Hyperscience to make it more accurate. Now they’re seeing accuracy rates at 99%. Um, and the outcome is phenomenal, such that even the first year, they kind of accidentally—they weren’t planning on it—but they saved $3 million. And that’s just chunking out, I think it was roughly between 11 and 14% of the workloads.
So Schwab is one of those customers that has fully adopted now a model-based approach to this problem. And they are seeing outsized benefits from that, both in terms of not only ROI, but speed and efficiencies in it. So an example of just kind of… you don’t need to be afraid of the switching costs. It’s not… you can chunk it out one at a time. But the key thing they did was switch that idea about where they use people and how they use it. They use it in line to make the machine better and move faster through the process, through human in the loop, instead of tossing everything out to a BPO.
So, Chip, before we move on to the next one, um, another poll question. Everybody out there: Top priority for automation. Costs, Data Quality, Compliance, Productivity, Decision Making, or Scalability? Top priorities? Don’t think too much about it, just pick one.
Chip VonBurg: I’m just gonna say there’s a couple of good answers on here. And honestly, I think they’re all really good answers. But as you look at these, I would say for me, Data Quality, that’s a big one. Reducing Costs, that’s always a big one. Um, and then Productivity, right? Like a lot of these stand out to me, but I do think there’s a couple of good ones here that, frankly, anybody that’s going through this journey has probably had to think about how this solution is gonna help me do that.
Brian Weiss: You know what I like on this is speeding up Decision Making, because folks have been so kind of stuck in the idea that it’s never gonna get better faster. I’m always gonna be trading off, you know, 30, 40% accuracy with all my BPO. And how much people fail to recognize the benefit of “holy moly, I’m getting this done in days instead of months.” Uh, and what does that do for not only my overall value chain, but the end customer, right? So it’s a calculus people haven’t actually put into play, because our ambitions for automation have been so limited by some of the rules-based systems that are out there.
All right, uh, what’s the answers? Whew. Decision making… look, I’m getting some love there for my favorite on this one. So, Increase in Productivity, that’s the big winner by quite a bit, actually, almost 65%. Reduction of Cost is up there as well. But I, again, I think there’s so many good answers here. I think, frankly, folks that have been through this probably say “all of the above,” right? I need choice E or whatever it is. Because I think they’re all really good ones.
Alright, let’s switch chapters here for everybody. We wanna talk about AI and ML, right? So clearly a rules-based system, something that’s gonna chase another template, hopefully everybody’s clear that that’s not the way forward. But Chip, ML’s gonna come and save everything, right? I mean, we’ve got ChatGPT and it’s just, you put everything over there and it’s a magic answer to everything. It’s the silver bullet. It’s zero shot outta the box. A hundred percent does exactly what people do, right?
Chip VonBurg: Eh, not exactly. So I mean, this is the reality, right? Is again, if you think back to that sort of history of IDP, you know, before we even get into Gen AI, but as machine learning was starting to be introduced, there was this feeling of “Okay, this is really gonna be it.” And it really was an evolution of the industry, right? It was very drastically different technologies that allowed you to go after much, much more difficult use cases. So it was a major leap forward as ML started to be introduced into these solutions. The problem is not all ML is created equal, right? So if you step back and you think about what is machine learning for a second, it’s really an algorithm or a set of algorithms, right?
And so you could actually have two vendors with exactly the same algorithm being deployed, but you could get very different results from those vendors. That depends, of course, number one, on how they chose to implement it and how they chose to use it. But also on what the system is actually set up to do and what the intention of the system is meant to do. So just because you have two systems that have ML-based technology in them doesn’t mean that they actually do the same thing. They may not both actually have all of the tools that are necessary to create that end-to-end process. So as you look at ML-based solutions, a big question you have to ask yourself is: What’s the problem that I’m trying to solve? How am I gonna grade this system to see if it actually helped me reach the success that I was looking for? And so, Brian, I’ll flip this back to you. We see lots of priorities and KPIs and goals from prospects and customers out there, but what’s your thought? I mean, what are you hearing?
Brian Weiss: Yeah. Um, look, I hear a lot of combination of two things: excitement and then confusion about AI and models. And I think what you have to do is you have to click a little bit deeper and ask yourself: What is this model built to do? And is it the right tool for the job? Um, now and initially people say, okay, there’s gonna be this big giant model that’s gonna do anything for everybody, but that’s not the reality. You won’t actually get what you need. Remember, we’re talking about a world where accuracy is absolutely critical. Getting it wrong costs you a lot of money, right?
So, taking the right approach matters, and Hyperscience has a very, very deliberate opinion about this because it’s the way we founded the company. The company was founded on deep learning, machine learning for very hard computer vision problems like handwriting. You can’t build a rule for that. So what the future really is, is an Ensemble Approach where I can give you a platform, Chip, that allows you to build a model quickly and easily, a narrow one. We don’t have to be that wide. We don’t need to be 60 billion parameters. Let’s think about what the job of that model is to do. It’s to understand information on a page accurately, right? And I can give you one that you can train on your own data, right? And so the pendulum is swinging; luckily, I like to say, people are sobering up. Really the future is an ensemble approach where you control the models that manage your data and you can train them to be accurate on your data.
Now, I think probably the most important thing, if anything, the takeaway, is the factor of control. So what I see out there in the world is like, “Yeah, we’ve got this accuracy and that throughput.” Well, at the end of the day, they’re just throwing data at an API and getting it back, right? That’s yet another black box, right? So there’s a wall of APIs out there. If you dig a little bit deeper into some of the recent vendors in the market, they’re just wrapping something around a single third-party model, not even a set of models, and you have no control over that. So ask yourself: When the model is wrong, what do you do? Right?
With Hyperscience and its models—plural—Hyperscience models are actually tuneable with your data. So you control the destiny of your accuracy. You have full transparency into what’s happening when. And you can do that with a set of models that not only are Hyperscience-based, but if I wanna pull in another model and say, “Hey, I’ve got all these answers, and I want to kick it out to Claude and see if it agrees,” you can do that as well. So it’s not a model, it’s an ensemble approach. And so I think people are really confused. They’re like, “Oh, great. What model are you using?” Lots of them, right? The critical piece is your visibility and control over it.
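The cross-check idea described above, where one model's answer is handed to a second model to see if it agrees, can be sketched in a few lines. This is a hypothetical illustration of the pattern, not Hyperscience's actual API; the model functions here are stand-ins.

```python
# Illustrative sketch of an ensemble cross-check: run two independent
# extractors over the same field and only auto-accept when they agree.
# Both extractor functions are stand-ins, not a real vendor API.

def extract_with_primary_model(field_image):
    # Stand-in for the primary extraction model's prediction.
    return "1,250.00"

def extract_with_second_model(field_image):
    # Stand-in for a second, independent model used as a cross-check.
    return "1,250.00"

def ensemble_extract(field_image):
    """Auto-accept only when both models agree; otherwise flag for review."""
    a = extract_with_primary_model(field_image)
    b = extract_with_second_model(field_image)
    if a == b:
        return {"value": a, "status": "auto_accepted"}
    return {"value": None, "status": "needs_review", "candidates": [a, b]}

result = ensemble_extract(field_image=None)
print(result["status"])  # both stand-ins agree, so: auto_accepted
```

The point of the pattern is that disagreement between models becomes a signal you control, rather than a silent error inside a black box.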
Um, and the example I always like to give is, look, if—and I’ll pick on Amazon for a minute here—if Textract is wrong, what do you do? Do you call Amazon and say, “I’m gonna send you four or five documents that you got wrong. Can you please make it better for me?” So that level of actually giving you the ability to deliver a hundred percent of the work with your control and visibility is kind of where you wanna be in the market. That’s the bet that Hyperscience made 10 years ago when we founded the company. Uh, and it is one really that is proving out to have the highest value for our customers.
Now, I’ll just knock on that a little bit more, right? Um, people often say, okay, it’s all about which API you’re using, or which single model you’re using, rather than a series of models. And it really comes down to whether you’re shopping for a doorknob or you’re shopping for a house, right? Um, and what you fail to realize when somebody says, “Yeah, I’m gonna build you a solution,” is that if you look at the diagram on the left, this is what it takes to build a sort of a solution around Textract. Like you have to create all of this, um, not just to get data off a page, but also to understand permissioning and scaling and training data management. How do you manage the model? Well, you don’t, right? It doesn’t belong to you in this case, right?
So don’t confuse the doorknobs with the house, and ensure that you’re asking the questions of: Do you have control over that model? Where does the data go? Can I make it better over time? And how do I think about human in the loop? How do I think about, do I make it better after the fact with a BPO, or do I actually put myself in line and make that model better? So I see lots of obsession with the STP [Straight Through Processing]. It makes absolutely no sense to me. You can have a black box model that thinks it’s got 90% STP, and then you have to go analyze the data and you realize it’s only 11% accurate. And this is true of lots of the black box solutions that are out there, right? “Yay, we got STP,” but it’s wrong. And it’s up to you to figure it out. I’m seeing that a lot in the market right now.
Chip, Hyperscience’s approach here, just to baseline everybody, it’s an ensemble approach where we have multiple models, and you can think of them as a “digital worker,” and they have very specific tasks, right? Classification, identifying data, transcribing data—those are all specific digital workers. And very much like a person, the way it works is you train it. You tell it what good looks like. “Okay, I’m gonna give you a pile of 40 pages, sort ’em out.” Not only you train it, but then you also QA the work. Did it do it well? Did it not? Same way you would a worker if you hired them, right?
Now, they come out of the box really good at doing things like reading handwriting and all of that, and then you supervise their work, okay? And the critical piece here that’s different for Hyperscience is when that model or that digital worker is confused and says, “Hey, I don’t know, is this a U or an A? It makes sense in the sentence in both ways. I’m not quite sure.” The system allows that model to raise its hand and say, “You wanted me to be a hundred percent or 99% accurate, I’m not sure, Chip, help me out. Come in here and solve this for me.”
Now, you unstick that model, but at the same time, you’re also then creating a new training set for going forward. So instead of spending a ton of money on a BPO that I’ve sort of thrown out for cleaning up the garbage, a fraction of that cost goes into sitting right next to the model and making it better over time, as well as processing data. So that mind shift is something that Hyperscience has really pioneered, and our customers like Schwab, for example, are seeing outsized benefit from switching to this type of an approach.
Chip VonBurg: So we’ll give Brian a break for a moment. But real quick before we get into the survey question, I just wanna reiterate how important that was, right? So it’s that complete solution. I can’t say that enough, but it’s actually having all the tools that you need not only to create and maintain, but to actually execute, report, and make things better. And I think that’s really one of the huge differences that Hyperscience has over many of the folks that are out there.
Brian Weiss: Yeah, if you’re buying a solution built on third-party models, the way you make it better is you ask for a different third-party model. Like, you’re kind of still beholden to the black boxes that are embedded in whatever you bought. Taking a different approach gives you control over your destiny here.
So, folks, um, another question: Are you exploring Gen AI for automation? This is very interesting. Are you actively implementing, in the research phase, or not yet exploring?
Chip VonBurg: Yeah, I think this is a great question. I’m curious to see, right? You see all sorts of stats out there on the internet as far as where people are, and I know from people that I speak to, you know, you see folks kind of kicking the tires a lot, but it’ll be interesting to see how people react to this. I have a guess, but I’m not gonna out myself here.
Brian Weiss: Mm. Actively implementing Gen AI. Wow, that’s great. Love it. That’s good to see.
Chip VonBurg: And it’s interesting because that’s gonna actually align with right where we’re gonna go. So no big surprise, right? We’re gonna go forward into Gen AI. This is sort of that next piece that I spoke about earlier. So again, remember in that historical view, Gen AI is what everybody is looking to implement today or many are looking to implement today. And so you can see from a couple of stats that I pulled out there, no big surprise, but since ChatGPT’s release in 2022, the number of organizations looking at AI continues to go up. No big surprise. AI is all over the place. Everybody is talking about it.
Interestingly, the stat in the middle of the screen seems to correspond with what we just saw in the poll, which is more and more companies are actually reporting top-line benefits through the use of their AI. So folks are actually starting to see returns on that investment. But we have to remember the age-old rule of Garbage In and Garbage Out, right? Because at the end of the day, data is the fuel behind AI. And if you’re not giving it good, clean data, eh, your results may vary, right? And so this is one thing that we really have to think about as we get into the Gen AI world.
So let’s look, let’s step back a little bit, right? And just see where Gen AI is today and the reality. I think all of us would agree that we’re seeing Gen AI a lot for content creation, right? It’s an easy sort of safe task. It’s things like creating an email, creating a one-pager marketing document. It’s things like that. And it’s great for that. I mean, I use it, and I think most everybody does use it already.
However, what about the Back Office? This is where the data that fuels our companies lives. And really we’re not seeing it there as much as we should. And the reason is pretty straightforward. Number one, the systems and processes are complex. And so we don’t wanna mess with that ’cause that’s hard stuff. But number two is: What about the data? That’s pretty sensitive stuff, right? If I’m in healthcare, it’s HIPAA. If I’m finance, it’s financial data. It’s all of this stuff that could be really sensitive to our customers and frankly to my business on its own. And so the problem that you see is there’s a difference in the data used to train the Gen AI models out there. And so…
Brian Weiss: Yeah, I mean, the amazing thing about this is that the back office turns out to be the actual motherlode of value for AI, right? If you think about it, these probability calculators for words, which are the frontier models, have eaten up every piece of public data you can train on. Like, there is no more training data left. If every human on the planet started writing tomorrow, we still wouldn’t produce enough to move the needle on those things. But the real value of AI now is not a generalized model that can write a paper on Abraham Lincoln; it’s actually the business data inside your enterprise. Um, it’s the purchase orders, the invoices, the supplier agreements, the enrollment forms. That’s the lifeblood of the business.
Now, the problem, to your point Chip, is that it’s proprietary. Are we gonna throw that stuff over to a language model that I don’t own, that I don’t control? Um, never mind the privacy concerns, never mind the business concerns about doing that. So the unlock for AI is actually the back office data. For all of you working in the back office: if the folks with AI budgets haven’t knocked on your door yet, they are coming to knock on your door and ask you if they can get the data.
The unlock here now is to say: How do I responsibly manage that very valuable information to a place where I can drive, say, for example, an agent-based experience? So I can get insights out of that data. I can connect information that’s not already connected. Remember, if it’s forms, invoices, contracts, things like that… some of this hasn’t been digitized, right? Maybe some of the metadata is in Oracle or in an ERP system, but sometimes it’s actually still sitting in boxes. And a lot of the history of technology is trying to unlock that value. Well, AI gives us the opportunity that, wow, if I organized all this data in a way a machine could read it privately, without error, and learn from all that information, and then I could turn around and ask it for help in making decisions or looking at patterns, you’d have a pretty serious unlock. Now, that is not gonna happen outside the enterprise. This is an inside-the-enterprise problem.
So where we find ourselves, Chip, really, is that automation technologies, IDP, and Hyperscience in particular for lots of reasons, are driving the first two steps of an AI use case, whether it’s order fulfillment or mortgage administration or invoice processing. It’s one thing to get the extraction right; it’s another to ask what it means.
Chip VonBurg: That’s what I was gonna add. To me, this is a nirvana use case that people have been talking about for years. What does it mean if I could ask the question of “What are my top vendors?” or “What are my best vendors?” Or, you know, “Am I seeing any patterns recently across all of the medical claims that are coming in?” Or “Is there potentially fraud with this particular customer?” There are so many things that you can start to do if you have the data. And I think that’s really the key: like you said, Hyperscience is a key to unlocking that. So, Brian, what do you see folks doing so far?
Brian Weiss: Yeah. Well, uh, I’m not gonna walk through all the mechanics of the lines on this page, but yes, it’s getting the data. Where Hyperscience is driving outsized value for customers is that we’re getting you the data in an accurate way. We are really the only solution that’s got an accuracy harness in-line around the output from Hyperscience, so you’re not having to kick it out to a BPO and roll it back and then make sure it’s right. And as we move into Gen AI, getting it wrong is not just “oh, it’s gonna cost me a little more ’cause I gotta pay somebody to do the transcription.” You just can’t throw dirty data at an AI; you’re throwing money away.
Um, so what we see, Chip, is first of all, the light bulb has gone on for our Hyperscience customers, because you’ve got this model architecture that learns on your data, that understands what it’s seeing, that gets better over time, that pulls a person in to help it when it’s confused. You’re getting this very, very accurate, clean pipeline of data. Now, while I’m doing that and saving a lot of money and increasing efficiencies and all of that, how about I also shunt a copy of that data into a Vector Database? I can chop up a 500-page document into the relevant sections that I would want to be able to search on and ask questions of later.
Our customers are doing this just because it’s possible. They’ve already seen their ROI five times over with Hyperscience; now they’re using it to feed Gen AI use cases very, very successfully. So again: chunking up data—not just the key-value pairs, but large batches of data that have some semblance of structure—pushing it down into a vector database, vectorizing the information, and then picking your model. If you wanna use Gemini, you wanna use Claude, whatever it is. You are in control now of the questions you want to ask. We are seeing this kind of “make your data RAG ready” approach, organizing the language of your business from the back office into a Gen AI use case, more and more every single day.
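[Editor’s note: the “make your data RAG ready” flow Brian describes, chunking extracted output, embedding it, storing it in a vector database, and retrieving relevant chunks per question, can be sketched in miniature. This is a hypothetical illustration only, not Hyperscience’s actual API: the chunker, the toy hashing embedding, and the in-memory `VectorStore` are all invented stand-ins; a real pipeline would use a trained embedding model and a real vector database.]

```python
import math
from collections import Counter

def chunk_text(text, max_words=40):
    """Split extracted document text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text, dim=256):
    """Toy hashed bag-of-words embedding -- a stand-in for a real embedding model."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""

    def __init__(self):
        self.items = []  # (embedding, chunk) pairs

    def add(self, chunk):
        self.items.append((embed(chunk), chunk))

    def search(self, query, k=1):
        """Return the k chunks most similar to the query (dot product of unit vectors)."""
        q = embed(query)
        ranked = sorted(self.items,
                        key=lambda item: -sum(a * b for a, b in zip(item[0], q)))
        return [chunk for _, chunk in ranked[:k]]

# Index two extracted "chunks", then retrieve context for a question.
# The retrieved text is what you would hand to the language model of your choice.
store = VectorStore()
store.add("invoice total due 4500 USD payable within 30 days")
store.add("shipping address 12 main street springfield")
context = store.search("what is the invoice total due")
```

The design point is the one Brian makes: the extraction and chunking step is done once, up front, and the model choice (Gemini, Claude, or anything else) stays swappable at the end.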
Chip VonBurg: So, Gen AI, right? We’ve talked about that. We kind of went through the progression. Is that it? Is Gen AI the big thing that’s coming, or is that the only big thing that’s coming? And the answer is no. Um, so interesting article published just a couple weeks ago from our friends over at Deep Analysis. They have this paper that they do annually to try to predict what’s gonna happen in the industry next year. It’s a great read. I’d recommend you check it out if you haven’t seen it already. Um, it goes through and it calls out a number of points that they believe are gonna be changes that we’re gonna see in the IDP industry in 2025.
Uh, and one of a couple of points that I pulled out here is Orchestration. And it’s interesting, I tend to agree; I think orchestration is also a game changer in IDP for a couple of reasons. There’s an interesting point here, that second bullet point, right? Which is “imagine thousands of complex AI agents running across the enterprise, and somehow you have to coordinate all of that data.” I think that’s quickly becoming the reality of the landscape. You know, you have a lot of these little niche players out there that do one or two things. And so you have people, sometimes at a business level, deploying these little tools to make life better, faster, and easier. But again, somehow you have to coordinate those. And how do you actually do that? It’s through orchestration.
And so this is something that Hyperscience does and frankly has done for a while in our technology that we refer to as Flows and Blocks. But what it allows you to do is take that next step. So just like it sort of says in the second bullet point, it allows us to cross the border… and Brian, you pointed this out a little bit earlier… and add some decision making in, right? So not only can we extract the data from this document or from this packet, but we can start to make the decisions. Are all the documents I need there? Is this person qualified? There’s all sorts of things, integrations with other systems. But the idea is it allows us to take that one step forward and actually start to take meaningful actions, which at the end of the day, isn’t that the reason that we’re automating? We’re not automating just simply to read the document, but there’s an action that we’re trying to take.
Brian Weiss: Yeah. And so what I’ll say is: Isn’t it cool if you could actually do the orchestration from within the AI agent itself? So from within the thing—in this case Hyperscience—that’s reading the data, that’s using the AI technology, and then within that same thing, you can make the decision. I think it’s a beautiful… I love the concept. I mean, they called it “a cross-border raid from IDP into automation frameworks.” And I find that, I mean, it’s a great phrase, but we’ve been doing it for years, right? I mean, the concept of now that I know what’s on the page and can make perfect sense out of it, why do I push that data downstream to some sort of an RPA that moves it? Just bring the data to the decision. Like, “Should I pay the claim or not?” Well, I don’t know. I can collect that information. We can make that decision. I can even do it in the context of the document. So if the user needs to be able to go look at a document, or if I’m punching that down into, say, for example, an AI agent, I don’t need to leave the context of that document to be able to make that decision.
So we’ve moved not only to human in the loop for helping the machine, but also to the next phase, which is the Knowledge Worker. Pay the claim, don’t pay the claim, call out the fraud, right? Are they qualified? Are they not qualified? You bring the data to the point of the decisioning, with the documents in frame. Much more powerful place to put that work that people do. And we’ve been doing it for years, so I mean, they think it’s novel that vendors are breaking out of IDP, but it is really the logical next step. But who wants five solutions to do that? Like, do I need an IDP and an RPA and a thing and another thing… for knowledge workers to log into?
Chip VonBurg: I wanna point out one other thing before we move on from Flows and Blocks, which is: when we think about the title of this webinar, “Future Proof,” I think this is one of the big pieces of the technology that is doing that. Because you can see in the bullet points on the left-hand side, these Flows and Blocks give you the ultimate flexibility to do what you need to do. So whether that’s very specific configuration that you need based on Hyperscience proprietary models, or how you want to implement the human in the loop steps, that’s great. You can adjust it. It’s all adjustable. If you need to reach out to business rules or some third party systems to validate data or handle that decisioning process, great. You can do that. And if you want to employ some third party model… say, maybe we want to compare notes, or maybe we want to send some of this data to a third party model to get its answer… it allows you to do that too. So to me, this is a big piece of how you future proof: make the platform flexible.
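[Editor’s note: the Flows and Blocks pattern described above, extraction, validation, and decisioning composed as swappable steps in one pipeline, can be sketched conceptually. This is a hypothetical illustration of the pattern, not Hyperscience’s actual Flows and Blocks API: the block functions, field names, and threshold are all invented for the example.]

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Document:
    fields: dict
    status: str = "pending"
    notes: List[str] = field(default_factory=list)

# A "block" is one step: it receives a document and returns it, possibly
# flagging it for human review. A "flow" is just an ordered list of blocks.
Block = Callable[[Document], Document]

def extract_block(doc: Document) -> Document:
    doc.notes.append("fields extracted")
    return doc

def completeness_block(doc: Document) -> Document:
    # Decision step: are all required fields present?
    required = {"claim_id", "amount", "policy_number"}
    missing = required - doc.fields.keys()
    if missing:
        doc.status = "human_review"
        doc.notes.append(f"missing: {sorted(missing)}")
    return doc

def decision_block(doc: Document) -> Document:
    # Take the meaningful action in the same pipeline that read the data.
    if doc.status == "pending":
        doc.status = "pay" if doc.fields["amount"] < 10_000 else "escalate"
    return doc

def run_flow(doc: Document, flow: List[Block]) -> Document:
    for block in flow:
        doc = block(doc)
    return doc

flow = [extract_block, completeness_block, decision_block]
approved = run_flow(Document({"claim_id": "C-1", "amount": 4200, "policy_number": "P-9"}), flow)
incomplete = run_flow(Document({"claim_id": "C-2"}), flow)
```

Because each block is an independent, swappable step, adding a business-rules lookup or a third party model is just inserting another block into the list, which is the flexibility argument made above.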
Brian Weiss: Ask yourself how many systems or people you have to go through to validate, to crosscheck, to look things up, to find that kind of information in order to have a decisioning. And if you could automate all of that and have any exceptions handled and or decisioning handled in the same interface that’s handling the documents, it becomes a pretty compelling answer for that one.
You know, Chip, I wanna call out a couple of use cases here where customers of ours are really sort of breaking into that AI territory in training Gen AI with the language of the business. And of course, unfortunately we operate in insurance and financial industries and none of those folks will let us use their name, but they’re real customers. So we’ll just call it Customer Number One. And they’re really about processing medical and cancer claims. This is sort of where they’ve seen the most value. They are already using Hyperscience to do all of their processing for very complex documents, handwriting, nested tables… that’s where they started. But now of course, we’ve broadened out to all of the lower hanging fruit use cases.
Now, what they started to do, maybe almost a year ago, was when the light bulb went on: “Wait a minute, I’ve got this data, it’s clean. Can I just punch this through into a vector database, which then goes into some sort of an agent that people can ask questions of?” They are now able to talk to GPT in-line about those claims, by virtue of the fact that Hyperscience has chunked that data up in a way that lets GPT actually provide valuable answers. So again, pioneering into that world. They didn’t wait for the AI folks to come knock on their door. They realized it was actually just low hanging fruit for the automation team to go do that work.
Uh, same thing. The second one is a UK-based customer, but they’re more focused on applications and disability claims. Similar to the first customer, they’ve taken that very clean, accurate information coming out of Hyperscience, but they’re merging it with all of the account history that they already have. So when they’re talking about a claim or an application, they have a 360-degree view of the claim and its history, which any end user at any given time can ask questions of during the application process. So when you realize the value of the data that’s flowing through these pipes… we thought about it as transitory in the pure IDP world, but you’re creating this motherlode of very, very valuable information over time, whether it’s fraud or patient history, those sorts of things. And it’s right there for the taking as long as the data’s accurate, right? Don’t mess the data up. We tend to jumpstart these use cases largely because you don’t need to send it out to a BPO to ensure accuracy… you’re ensuring accuracy with Hyperscience.
Um, so Chip, we’re at 45 minutes on the hour, which is a reasonable place for us to take a little bit of an inventory. Let’s quickly summarize what we’ve talked about today and some of our big opinions. What you’re seeing here are some of the advantages of the Hypercell, which is our core product. It covers automation and IDP, as well as what we call Hypercell for Gen AI, which takes that data and allows you to drop it, say for example, into a Vertex or a Gemini or any of the multiple applications you could use.
Chip VonBurg: Yeah. So hopefully this has been educational for folks. What we wanted to do is give you a little bit of a view of how the technologies have evolved and where they’re gonna continue to evolve. And so if we look back, the problem is still real, right? Even though Gen AI and all of that technology out there is getting better and faster, the problem still exists because these problems are hard. We talked about rule-based systems: they were a great innovation 15 or so years ago, and definitely a big step forward, but they don’t really scale to where we need them to scale to cover all of the use cases.
So the next chapter really is about AI and how to think about it. And don’t go looking for the one magic model, right? Models, plural, is the future of where you need to be. And most importantly, think about how much control and visibility you have over the outcome of what’s going on with that model. Our customers who are seeing outsized success are banking on Hyperscience as a platform to build a set of models that complete a task and that they can change at any given time: add another model in to do another validation, or swap in a different one. Also think about how you distribute human work. If you’re just buying yet another black box, and you’re out there shopping for the best black box by some metric that it’s not really accountable for, you still have to pay for the people to do the work on the other side. There is no a hundred percent out of the box.
So focus on: What are the people costing you and why? And how can I get that work into a platform that uses the least amount of people for the highest outcome? And for every penny I spend on people, does it make my process better? Now, that requires that you be able to train models. You can’t call Amazon and send ’em documents and ask ’em to make it better. You’re just waiting for them to do something. So don’t put yourself in that position. Put yourself in a position to control your destiny. And sure, you can grab a third party model and put it on the Hyperscience platform and use that as well. But the final point here is cost effectiveness. You don’t need a billion parameter model and GPUs and all that good stuff to do this. Build yourself a sovereign model, or a set of them, on the Hyperscience platform to come up with an outcome that you have control over.
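[Editor’s note: the “models, plural” idea, pairing a primary model with a second model that validates its output and routing disagreements to a person, can be sketched like this. The two extractors here are trivial stand-ins invented for illustration; in practice each would be a separately trained model, but the routing logic is the point.]

```python
def primary_model(text: str):
    """Stand-in extractor: returns the last numeric token as the field value."""
    nums = [t for t in text.split() if t.replace(".", "", 1).isdigit()]
    return float(nums[-1]) if nums else None

def validator_model(text: str):
    """Independent second opinion: returns the first numeric token instead."""
    nums = [t for t in text.split() if t.replace(".", "", 1).isdigit()]
    return float(nums[0]) if nums else None

def cross_check(text: str) -> dict:
    """Auto-complete only when both models agree; otherwise route to a person."""
    a, b = primary_model(text), validator_model(text)
    if a is not None and a == b:
        return {"value": a, "route": "auto"}
    return {"value": None, "route": "human_review"}

# Agreement goes straight through; disagreement becomes a human-review
# exception, so people are only spent where the models are unsure.
clean = cross_check("total amount due 129.95")
ambiguous = cross_check("invoice 42 total due 129.95")
```

This is the economics argument above in miniature: every document the models agree on costs no human time, and every disagreement is exactly where human effort improves the process.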
The last piece of this is around the privacy of that data. So if you’re concerned about data, if you’re concerned about where you’re putting the data, if you’re concerned about who has visibility to it… Hyperscience allows you to run this on-prem. You can run it SaaS in any flavor of SaaS you want. Or you can run it in our private cloud or in our cloud, which, as of yesterday, is also accredited FedRAMP High. So that’s IL5 government data. Remember, Hyperscience grew up in both government and highly regulated industries, and we solved for this problem of data control and data sovereignty early on in the history of the company. We have customers operating in an air gap environment, building high performing models without ever having to hit an API outside. So those are some things that we wanted to make sure we highlight there.
Brian Weiss: And then I think, Chip, the last one that you raised is: don’t forget about Orchestration. Think broadly about a platform that incorporates both orchestration and understanding information. Why would you separate out understanding the data from what you wanna do next with it? You don’t have to do that. You don’t need four solutions. And you don’t really need as many people as you might have in place right now to do that work.
Chip VonBurg: The big one that you hit already for me is the Deploy Your Way. And you know, again, as folks are getting into a few of these, although they’re separate topics, they sort of intermingle a little bit. And as you talk about Gen AI, which again everybody is talking about, so often it’s SaaS only, and that doesn’t work in every environment, doesn’t work in every industry. And for me, the fact that Hyperscience is built in such a way that you can deploy your way, not just based on what we think, but based on what the needs of your business and your industry are… to me, that’s huge. We can’t sacrifice security and privacy for the sake of technology. And I think others out there are doing that a little bit.
Brian Weiss: Yeah. And look, for everybody who’s out there, I’ll tease you on the topic of the next one. If we had another few minutes here, we would go into the model creation and model management part of Hyperscience. So when I say you can build your own models, a lot of people go, “Whoa, whoa, wait a minute. That sounds complicated. I’m gonna have to hire data scientists. It’s gonna be expensive. I’m gonna have to do Jupyter Notebooks and all kinds of things like that.” But really, one of the advantages of Hyperscience is that we have streamlined the data science part of the work—how you annotate, how you train… we even cluster the documents that are gonna have the highest impact on your model for you ahead of time. And the goal here is that it’s a business user that does that work. The folks who build Hyperscience models are the same resources that are helping it tell the difference between an A and a U. So democratizing the complexity of models—data drift, when do I train, how do I train, choosing the pieces that are gonna have the highest impact, and allowing you to test the old against the new—all of those things are an investment area for Hyperscience. We haven’t really covered that, but I would be remiss not to mention it in the last few minutes that we have here.
Chip VonBurg: Definitely worth the pause there.
Brian Weiss: Yeah. So if you’re scared by all that… what we are not is a highly paid data science consultancy with a bag of doorknobs that’s gonna show up and build you something that’s obtuse and complex and can’t be trained.
I wanna do one last call out here. I mentioned at the top of the hour that Chip VonBurg is a recent hire here at Hyperscience. He’s a deep, deep expert in this industry. And Chip’s job is to consult with our customers. And so if you’re interested in time with Chip, please reach out. Uh, Chip, you want to make a plug for yourself here?
Chip VonBurg: But honestly, as I said up front, I’ve been doing this stuff for a long time. I’ve met with customers in every industry and honestly, you know, my role is really hopefully to help you understand how to go after your automation needs a little bit better. So if you have a solution in place today and you want to see, is it giving me everything that I need? Or if you don’t have a solution in place today and you’re trying to figure out how should we go about doing it, take advantage of the time. I’m very happy to meet with folks and give you my 2 cents on how things look or how things should be laid out. And so, take a snapshot of the QR code in the bottom corner there. Reach out and we’ll get something on the calendar. And I’m really, really looking forward to sitting down with folks.
Brian Weiss: Yeah. And so take advantage. The chances that Chip has worked with one or many of the products that you may own right now are very, very good.
Chip VonBurg: I’ve touched most things that are out there once or twice.
Brian Weiss: Yeah. Once or twice, maybe three or four times. Uh, I wanna thank you all for spending time with us today. Again, this is a recurring series. We’ll come back probably with a little more on QA and that sort of thing. But thank you for your time today, and we hope to see you out in the field.
Chip VonBurg: Yeah, very good. Thank you folks.