Transforming Business Processes with AI-Led Automation
By leveraging the power of AI, organizations can transform even their most complex business processes, driving growth, efficiency, and market differentiation. Watch this on-demand webinar to see how an AI-led approach to automation surpasses the limitations of legacy, rules-based systems and uses human supervision to ensure the accuracy and reliability of the outcomes.
Greg Hauenstein: Welcome to today’s webinar, Transforming Business Process Automation with AI-Led Automation. In today’s session, we’re going to cover a few different topics: the history of where organizations have been and the options they’ve had available to them, as well as our point of view on how AI is evolving capabilities and the impact it will have on leading organizations, today and in the future.
Greg Hauenstein: My name is Greg Hauenstein. I am one of the account executives here at Hyperscience. I’ve been here a little over two years and been in the document management, ECM, and RPA space for over a decade. I’m joined today by Stephen Yanchuk. Do you want to introduce yourself, Steve?
Stephen Yanchuk: I appreciate the time here. Looking forward to taking you through the product and getting you excited about what we can help out with from an organizational perspective. I work alongside Greg here from a technical perspective.
Greg Hauenstein: At the end of the day, we can all agree that there’s a constant flood of requests for enhancements and an operational drive to make things better than they are today. When we look at the technology landscape, there have been a number of periods of disruption that paved the way for what we’re capable of today. AI is the next wave, and the questions are how it will be brought in and what it will fundamentally change. We at Hyperscience started the entire company with a foundational hypothesis: that we could use machine learning and artificial intelligence to fundamentally improve the lives of our customers, letting them use tools to lift the mundane out of their business processes and focus on higher-value work.
Greg Hauenstein: When internal disruptors ask, “how can I do that?”, RPA is often the tool organizations go to first: how do I use a bot to emulate the actions of my human team member? But as digital transformation initiatives were undertaken, what ends up happening is backwards-facing automation, where someone has to predetermine everything that might happen in a business process so that error handling or exception handling can be put in place. This leads to significant underutilization of the technology itself. Only about 40% of licenses are ever actually used, and even among projects that are deployed, only about 60% end up meeting expectations.
Greg Hauenstein: The template-based, if-then, hard-coded logic methodology can’t account for the variability and the unstructured nature of what comes into a lot of these processes. The first mile of any automation journey is all about getting to data. We can’t know what rules to apply unless we have data to act upon. The real macro problem is that most of the data coming into organizations, 80% according to Gartner, is unstructured. It isn’t a clean form you can extract information from with a template; it’s emails full of correspondence, long, text-heavy contracts. The unpredictability and variability in these processes mean the legacy tool set isn’t going to work going forward.
Greg Hauenstein: The bridge to the future is AI. We’re seeing organizational leaders take that step toward tools with more cognitive abilities that utilize machine learning. When you think about a technology project like this, think of it like an employee you’re bringing in to address that core problem. Do you want to hire the employee you have to train, and then retrain every time something new comes up? Or do you want to hire the employee who uses the information and experiences they’re having on the job to make recommendations and form an opinion on the best way to handle the unknown? That latter option is the analogy for AI. We want systems that can learn, using the information of the past to predict and automate on the unseen.
Greg Hauenstein: The challenge with leveraging all of this is that most leaders think adding or developing AI creates a massive point of delay and requires bringing data scientists in-house. But the reality is that platform tools like Hyperscience abstract most of the actual data science and pipeline building away, leaving a business-user tool focused on training toward the outcome. GPT-4 is estimated to cost over $150 million to retrain. Hyperscience, by contrast, lets you leverage both our underlying NLP and models you can create with as few as a hundred documents, training on your own data in your own environment and delivering differentiated results with novel technology.
Greg Hauenstein: Document processing is a first-mile problem in a lot of organizations. Every bit of service delivery starts with a document or text, and you have to derive meaning from that stream of unstructured information so that you can actually do the rest of the operational process. I’m really excited to hand it off to Steve so that he can show you what these different tool sets are and the different ways you might be able to leverage them inside of your organization.
Stephen Yanchuk: I appreciate that, Greg. I want to introduce you all to the Hyperscience platform in a way that tells a story, one that not only shows the full breadth of the platform but may also get the gears turning on how we are helping many of our customers. I want to take you through a manual, resource-intensive process in the mortgage industry and show how to not only improve on these inefficiencies to cut costs but also improve the customer service experience.
Stephen Yanchuk: What we’re looking at here today is the Hyperscience platform, which is a single pane of glass for building your models, processing documents as a data keyer, administering the platform, and reporting. We view our platform as a business-user tool. It is intuitive, simple, and does not require coding experience to get up and running. Documents waiting to be processed are here in the submissions tab. You can also view these from a task or high-level supervisor role, where you can build specific SLAs and queues.
Stephen Yanchuk: When documents come into Hyperscience, they all hit a flow. I like to think of a flow as your workflow, that left-to-right conveyor belt experience where documents are entering the system. We have a number of out-of-the-box connectors that you can tie into. What I want to focus on is the fundamental aspect of how Hyperscience works: the human and the machine working together to meet your target accuracy. Out of the box, you can set your target accuracy. It’s your north star, and everything is built around that accuracy. We’re not going to automate at the expense of your accuracy.
Stephen Yanchuk: Three fundamental things are going to happen here using our AI and ML capabilities. First, the machine is going to use classification models to say what type of document am I? Am I a W-2? Am I a pay stub? Next, the machine is going to identify the pertinent data points using field identification predictive models. Finally, using our fine-tuned transcription models, the machine is going to try to get that data downstream. The most important piece is that the human and the machine are working together to maintain that accuracy. Any time the machine is not sure it’s going to meet that accuracy, it can raise its hand and tag in its human friend.
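To make that accuracy-driven escalation concrete, here is a minimal sketch in generic Python rather than the Hyperscience API: each machine prediction carries a confidence score, and anything below a target threshold is routed to a human reviewer instead of being auto-accepted. The field names and the 0.95 target are assumptions for illustration only.

```python
# Illustrative sketch only -- not the Hyperscience API. It shows the general
# human-in-the-loop idea: auto-accept confident predictions, escalate the rest.
from dataclasses import dataclass

TARGET_ACCURACY = 0.95  # hypothetical "north star" threshold


@dataclass
class Prediction:
    field_name: str
    value: str
    confidence: float  # model's estimated probability of being correct


def route(prediction: Prediction) -> str:
    """Auto-accept confident predictions; escalate the rest to a person."""
    if prediction.confidence >= TARGET_ACCURACY:
        return "auto_accept"
    return "human_review"


predictions = [
    Prediction("document_type", "W-2", 0.99),
    Prediction("ssn", "***-**-1234", 0.82),  # low confidence -> human review
]
for p in predictions:
    print(p.field_name, "->", route(p))
```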
Stephen Yanchuk: Let’s dive in and see how this process works. A submission is a single document or a batch of documents. The machine has already done some of the work for me and has classified the documents using that classification model. We have our W-2s, a Fannie Mae document, a couple pay stubs, and a bank statement. That classification model is all built within your library where you can take structured, semi-structured, and unstructured documents, build your layouts, and train your models.
Stephen Yanchuk: If the machine ever does need my help, it’s going to tag me in so I can assist. The machine is waiting for me to help process and get these documents downstream. That first step is the field identification. I no longer have to focus on the entire document but only the data points that the machine asks for. My only job as a data keyer is to simply point and click and find the data point that the machine is asking me to help with. In this case, it looks like the social security number. The machine already knows all this data exists; it’s done a full page extract, and now I just need to give context to that field.
Stephen Yanchuk: On the bank statement, we can also handle tables. Whether the tables are single page or they span across multiple pages, the machine is asking me to review to make sure that it has found everything accurately. I can quickly tab through. If the machine was not super confident, it might highlight some stuff in a yellow border. This whole process becomes a very simple point-and-click solution. Instead of scanning each document and looking for data points, I can have the machine find most of the data points with minimal help from me.
Stephen Yanchuk: The next step ties back into our flow. I mentioned before that we could add business rules to help automate. In this flow, I want to automate some other processes. I want the machine to make sure I have the correct documentation: a certain number of W-2s, plus the last two bank statements and pay stubs based on specific dates. And I need to ensure that the stated income across all of these documents has less than a 5% variance. It’s creating action from that data. Today I’m doing this fully manually, entering that data into a spreadsheet and doing the math on my own. That’s introducing risk and potential errors downstream.
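As a hypothetical illustration of those rules outside of any product, the snippet below checks required document counts and the 5% stated-income variance; the required counts and the sample values are assumed, not taken from the demo.

```python
# Hedged sketch of the business rules described above: verify expected document
# counts and check that stated income varies by less than 5% across documents.
from collections import Counter

REQUIRED_DOCS = {"W-2": 2, "bank_statement": 2, "pay_stub": 2}  # assumed counts
MAX_INCOME_VARIANCE = 0.05


def missing_documents(doc_types: list[str]) -> dict[str, int]:
    """Return how many of each required document type are still missing."""
    counts = Counter(doc_types)
    return {doc: n - counts[doc] for doc, n in REQUIRED_DOCS.items() if counts[doc] < n}


def income_within_variance(stated_incomes: list[float]) -> bool:
    """True if the spread between stated incomes is within the 5% tolerance."""
    low, high = min(stated_incomes), max(stated_incomes)
    return (high - low) / high <= MAX_INCOME_VARIANCE


docs = ["W-2", "W-2", "pay_stub", "pay_stub", "bank_statement"]
print(missing_documents(docs))                       # {'bank_statement': 1}
print(income_within_variance([98000, 95500, 97000]))  # True (about 2.6% spread)
```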
Stephen Yanchuk: Here, the machine will collate this data for me into what we call custom supervision. This allows me to quickly review all the data and where it was found on the documents, based on the rules we set. This first view is much more detailed, laid out by rule: we need the last two years of W-2s, the stated income from the mortgage submission and the W-2, and the calculated variance between them. We have one bank statement; I think we need two, so that might be a problem.
Stephen Yanchuk: We can streamline this a little bit more by tying everything together in more of a decisioning process tied to a case. I can come in here and say, okay, the machine is making a recommendation that we have all the W-2s, so I can click valid. Stated income looks valid. The bank statement, we know that was invalid. I can take all of that extra, unnecessary data out of the equation, let the machine ingest the data and analyze it, and then guide me on whether or not the information is validated.
Stephen Yanchuk: Let’s take it one step further. What if the applicant sent in some extra data with maybe some important information on it? I’m expecting the loan document, but maybe I’m not expecting this extra handwritten note. Most of the time I’m probably going to ignore it. But with Hyperscience we can create actions and capabilities to ingest that ancillary document and filter through it for us, instead of having to review every miscellaneous document ourselves. With our text classification models, not only can we extract all of the data from that iPhone picture of a handwritten note, we can also train models to identify things like PII or the text sentiment, which can then be flagged for human review.
Greg Hauenstein: This example Steve’s walking through here is one that there would simply be no way to handle with the legacy tool set.
Stephen Yanchuk: I appreciate that, Greg. I can set specific fields. On this loan application, I know that the social security number needs to be accurate, so I always want to have a human check it. Even with this poor handwriting and the scribble outside of the box, the machine did a good job there. But what about that handwritten note? Most of the time I would ignore it, but now the machine is actually flagging it to my attention saying, hey, we have some PII, let’s review it. And then there’s also a negative customer experience here, that negative sentiment, which we can train models to detect based on your data. I can see here the website was hard to use, they had an issue. Now I can create an action from this: let’s get someone to proactively reach out to this customer and help out.
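The trained PII and sentiment models themselves aren’t shown in the demo; as a rough stand-in, the sketch below flags an SSN-like pattern with a regex and uses a small keyword list in place of a sentiment model, purely to illustrate the kind of flags a reviewer might receive. The cue words and sample note are assumptions.

```python
# Illustrative stand-in for trained classifiers: regex-based PII detection and a
# keyword check in place of a negative-sentiment model.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
NEGATIVE_CUES = {"hard", "issue", "frustrated", "problem"}  # assumed cue words


def review_note(text: str) -> list[str]:
    """Return the flags that would send this note to a human reviewer."""
    flags = []
    if SSN_PATTERN.search(text):
        flags.append("contains_pii")
    if any(cue in text.lower() for cue in NEGATIVE_CUES):
        flags.append("negative_sentiment")
    return flags


note = "The website was hard to use and I had an issue. My SSN is 123-45-6789."
print(review_note(note))  # ['contains_pii', 'negative_sentiment']
```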
Stephen Yanchuk: Maybe we want to create workflows where we take the PII the machine has detected on this document and compare it to the PII on the application to make sure it’s accurate. If the social security number is different, we know it’s inaccurate and we want to flag it and reject this document from going downstream. We can use the flow to trigger an email or a call to the customer. We can even automatically redact specific information, like social security numbers, based on what is important to the organization. With our AI and ML capabilities, we’re able to take a very manual and cumbersome process that introduces a lot of risk and create much more efficient throughput.
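Here is a minimal sketch of that cross-document check and redaction, again in plain Python rather than the product’s flow configuration, assuming the standard SSN format; the sample values are invented for illustration.

```python
# Hedged sketch: compare the SSN found in the note against the SSN on the
# application, flag a mismatch, and redact the value before it moves downstream.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def ssn_matches(application_ssn: str, note_text: str) -> bool:
    """True only if the note contains an SSN and it equals the application's."""
    found = SSN_PATTERN.search(note_text)
    return bool(found) and found.group() == application_ssn


def redact_ssn(text: str) -> str:
    """Mask any SSN-shaped value in the text."""
    return SSN_PATTERN.sub("***-**-****", text)


application_ssn = "123-45-6789"
note = "Per our call, my SSN is 123-45-6780."  # differs from the application
if not ssn_matches(application_ssn, note):
    print("SSN mismatch -- flag and stop this document from going downstream")
print(redact_ssn(note))  # Per our call, my SSN is ***-**-****.
```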
Greg Hauenstein: Thanks, Steve. The biggest message I want to make sure lands is that document processing is just the beginning. Being able to turn that picture into data is just step one. One question I get a lot is: how do I actually go from Hyperscience as a platform to a predictive model that can be used in my organization?
Stephen Yanchuk: I can cover two different ways. The first is structured documents, the templatized documents you’re probably familiar with today, like that loan document. With our platform, you can get day-one automation: upload a blank form, draw your boxes around the fields, and start processing. But that’s the easy part. What’s that next layer? That is most likely where your challenges are, like a pay stub or an invoice, where the documents may look different and the fields may be in different places. Trying to create a template for those is an admin nightmare. With our model capabilities, and without needing to code, you can create buckets of document types and simply upload your training documents. We only require a few hundred. Not only can you have the machine analyze this data and put the documents into clusters, it can also make suggestions on where you should have more documents or fewer documents, so you end up with a nice, unbiased base model.
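As a generic illustration of that clustering idea (not Hyperscience’s internals), the sketch below uses TF-IDF vectors and k-means from scikit-learn to group a handful of sample documents and report which clusters look under-represented; the sample texts, cluster count, and minimum-per-cluster threshold are all assumptions.

```python
# Generic illustration of clustering a training set to spot under-represented
# document variants: vectorize the text, cluster it, report thin clusters.
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "Employer ABC Corp W-2 wages and tax statement 2023",
    "Employer XYZ Inc W-2 wages and tax statement 2023",
    "Pay stub period ending 03/15 gross pay net pay deductions",
    "Pay stub period ending 03/31 gross pay net pay deductions",
    "Bank statement account summary beginning balance ending balance",
]

MIN_PER_CLUSTER = 2  # assumed minimum examples per layout variant

vectors = TfidfVectorizer().fit_transform(documents)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster, count in sorted(Counter(labels).items()):
    note = "needs more examples" if count < MIN_PER_CLUSTER else "ok"
    print(f"cluster {cluster}: {count} documents ({note})")
```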
Stephen Yanchuk: From there, it’s back to that same point-and-click approach: I annotate a few hundred documents, mapping data to the names of the fields we want to capture, and then run training. Once we start to train and annotate those documents, we can reanalyze and catch things like inconsistent annotations. We can let the machine keep us consistent as well from here.
Greg Hauenstein: When you look at how the Hyperscience tool has abstracted the data science away, what we’re actually doing is treating a use case just like we would treat a new employee coming into this work stream. We’re showing them how to interpret this document, what the data is structured like, and giving examples. And when we hit that run training button, it’s the same thing as with a human team member: the system learns the underlying heuristics of what gives a document this structure and what this data means, so that it can automate on the unseen and actually be predictive while balancing accuracy.
Greg Hauenstein: I appreciate you going through that detailed explanation, Steve. If there’s anything in here that your organization wants to understand at the next level of depth, go ahead and hit that book a meeting button. I just want to thank everyone for spending some time with us and joining us to go through this, hear our point of view, and understand what our platform is capable of.