2026 Vibes: 5 Trends Shaping IDP & Automation Roadmaps
The era of brittle, template-based automation is over. AI systems are no longer just tools employees use; they’re systems organizations must supervise, trust, and govern. Is your operating model ready for that responsibility?
Watch GigaOm Senior Analyst Dana Hernandez and Hyperscience CTO Brian Weiss in a forward-looking discussion on the 5 critical trends shaping the next phase of Intelligent Document Processing (IDP). Moving beyond vendor hype, this session explores the market-level shifts defining the AI-first enterprise, from the rise of “vibe-driven” instruction to the evolution of AI agents from copilots to trusted co-decision makers.
In this session, we explore:
- The Trust Factor: Why explainability and transparency are becoming non-negotiable.
- The “Vibe” Shift: How intent-driven orchestration is replacing manual configuration.
- ROI Reality: Why seamless integration and business outcomes—not just accuracy—will define success.
Discover how to future-proof your automation strategy and prepare for a world where AI doesn’t just process data, but acts on it.
Brian Weiss: Hi, everyone. I am Brian Weiss from Hyperscience, and I am joined by Dana Hernandez from GigaOm to discuss the evolving trends in IDP and automation in the marketplace right now. I’m always thrilled to work with folks like Dana, because the analysts really have the dual perspective of a deep understanding of what’s happened in the past and what is happening going forward. And I know, Dana, that you have recently published GigaOm’s radar report, so this discussion is very timely. Tell me a little about it.
Dana Hernandez: Hi, Brian. How are you today? It’s great to be here talking. I personally love talking to vendors about what’s going on and what they’re seeing in the industry as well. With the GigaOm radar, what we do is focus on the functionality and capabilities of the solutions we look at. In particular, for this report I wanted standalone vendors that focus on IDP, not an RPA vendor that has an attachment for IDP that you can’t buy standalone. We look at a lot of the key features like no code, low code, who do you integrate with, how do you integrate, templates, template free, industry support. But then we also look at forward-looking items like intelligent workflows and language capabilities, things that maybe not all vendors have but may be more emerging in the market. And then we take a look at scalability and how easy your solution is to use. All of those put together are how we assess folks in this particular market.
Brian Weiss: It’s a really valuable piece of work. I enjoyed this last version of it. It’s a great read. For today, Dana, we’ve got five topics. We’re just gonna use these to focus a discussion about trends. As the CTO at Hyperscience, I invest the money in where we’re gonna go. I’m keenly interested, and I’ve been keeping track of the technologies that are evolving as quickly as they are and making really strategic investment bets in how we leverage everything that’s out there. These are the quick trends that I wanna frame up for us. Let’s do the first one.
Brian Weiss: The premise is that explainability, traceability, and transparency will become non-negotiable going forward in 2026 in the IDP market. There are some interesting stats from a recent Harris Poll on AI decision-making in general, and it’s not surprising that 85% of those efforts just get stalled because of exactly this problem: explainability and traceability. How do you see that coming into play in the IDP market specifically?
Dana Hernandez: I feel like traceability, explainability, auditability, in particular for mission-critical solutions or industries is a showstopper. It’s a must-have. I think in particular in the AI world and in IDP, when you’re looking at solutions today, the ability to track back and figure out why this happened, what is the data, is it the correct data at some point in the process, is key. I find even the whole idea of explainability, traceability, end-to-end auditability used to be very focused on regulated industries. And now I see it for industries across the board. I think AI is helping drive that because when we get answers from AI, we really wanna know, is it right? Is this the right answer?
Brian Weiss: It’s interesting. The downside of AI is what’s driving the need for explainability and traceability. We see it in spades in our business, and we work with a lot of regulated and privacy-controlled data. It’s also part of the realization that people got excited that, “Hey, I’m gonna have one magic model, and it’s gonna do everything.” Broad general models don’t tell you when they might be right or wrong. The market is evolving to an ensemble where you pick the right tools, plural, for the overall job. One of them might be a large model to do a piece of work, or a small model. But at the end of the day, if those things don’t tell you when they might be wrong and they’re not accountable for being wrong, all the system can say is, “Whoops, sorry, I don’t know when and how that happened,” and it’s wrong, and now you’re in trouble. The power of the models is also the flip side. You get, “Wow, it can read this thing without anybody’s help,” but when it’s wrong, it’s really wrong. So I often ask the question not, “Hey, how accurate is your system?” but, “What does your system do when it’s wrong? Is it accountable for it or not?” I see this as critical: the closer you get to mission-critical documents, the more this is important.
Dana Hernandez: It’s a showstopper for mission-critical. Definitely.
Brian Weiss: Next one, lightning round. In 2026, the vibe movement, vibe coding, et cetera, will effectively end the era of brittle template-based automation. How do you think about this in the context of the shifts in IDP and maybe the direction that we are going in the next year?
Dana Hernandez: Let’s start with template-free versus template-based. I think the IDP industry has been moving away from templates over the last couple of years anyway. I think the vibe coding movement is helping escalate that or speed it up. But on the flip side, you’ve got to have all the other governance and guardrails in place to make sure you don’t go deleting a company’s entire database.
Brian Weiss: Hyperscience was sort of born as a model-driven AI deep learning company, so we’ve always been somewhat allergic to the idea of constantly chasing templates. It doesn’t really fit the paradigm. As these models get more and more powerful, though, I sometimes see the golden-hammer, wrong-tool-for-the-job problem: people say, “Well, I don’t need a template because I’ve got this golden hammer that gives me the answer,” but you just spent all your money getting an easy answer. I actually see an interesting part of this where we can use the AI to build that process, so the people part of getting that done is not as hard. I don’t wanna have to manage it, so can you have an AI just mark this thing up, do it for me, check the variations? It’s using AI to drive the orchestration of the best process. We’re having great success with that as well.
Dana Hernandez: I do think that people who invested years and years building libraries of templates and chasing new ones, that’s done. Whether it’s industry specific or not, ’cause now you have language models that are industry specific.
Brian Weiss: Why have a broad model? We’re fine-tuning visual language models to specific problems like pay stubs. Using a combination of a large vision language model and a tight harness of targeted models aimed at tables and fields together actually gets us a really accurate answer at the end of the day. But they work together. That’s the ensemble.
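For readers who want a concrete picture of the ensemble idea Brian sketches here, the snippet below is a minimal illustration, not a Hyperscience API: a broad vision-language model proposes values for the whole page, while small targeted models (say, a pay-stub table model) overwrite the specific fields they were trained for. The function and model names are hypothetical stand-ins.

```python
# Minimal sketch of combining a broad VLM pass with targeted field/table models.
# All model interfaces here are hypothetical stand-ins for illustration only.

from typing import Callable

def extract_with_ensemble(
    page_image: bytes,
    vlm_extract: Callable[[bytes], dict[str, str]],
    targeted_extractors: dict[str, Callable[[bytes], str]],
) -> dict[str, str]:
    """Combine a general vision-language model pass with field-specific models."""
    # 1. Broad pass: the VLM proposes values for everything it can see.
    result = vlm_extract(page_image)
    # 2. Targeted pass: specialized models overwrite the fields they own,
    #    since they tend to be more accurate on their narrow task.
    for field, extractor in targeted_extractors.items():
        result[field] = extractor(page_image)
    return result
```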
Brian Weiss: By the end of 2026, enterprise AI agents will move from co-pilots to co-decision-makers in high-stakes environments. What do you think about this one? Here’s an interesting piece of work from Deloitte.
Dana Hernandez: My first thought when you read that, Brian, was that the words “high stakes” jumped out at me. So by the end of 2026, co-decision-makers, AI taking over; I get stuck on high stakes. I think in a high-stakes world, AI will provide a lot of interesting information, options, data, hopefully traceability and auditability for the person that’s the co-decision-maker, but I still see humans making those high-stakes decisions, and maybe even questioning what came from AI. I think in the lower-stakes environments, I see a lot of co-decision-makers and possibly completely autonomous AI being able to run through a process within a certain set of parameters, and if everything lines up, then that end-to-end process can be handled agentically.
Brian Weiss: That’s a really interesting one. Kind of almost a meta “go, not go,” “in good order, not in good order” decision-maker on top of an organized process. Do you see that embedded more in vendor offerings? My experience has been that that tends to be cobbled together between somebody who does the IDP and somebody who does the bot work and there’s been a history in automation of stitching together various things.
Dana Hernandez: I believe it’s still kinda stitched together, and there are still little exceptions where you go to a screen and you process through. I think long term it’ll be more embedded. I think ideally it’s embedded in the solution, but I think we’ve got a little bit to go to get there.
Brian Weiss: I often get asked, “Are we an RPA company?” And I don’t wanna be an RPA company in terms of moving things around on a schedule and automating remote tasks. What I see as a tremendous appetite in our customer base is to bring the decision-making about a document or a packet of documents, and the data that’s in it, closer to the document. I’m seeing that a lot: folks who’ve stitched together RPA on either side of an IDP event realize, “Well, why do I need to send this out to XYZ solution when I can just check to see if it’s right here?” They’re doing validations like, “Hey, did you fill in the right number? Does the income number you filled in at the beginning of the mortgage application match these three numbers added up across the documents you submitted?” I don’t actually need to leave Hyperscience to do that. And I can also go check a database to see if you’re a current customer and what your credit score is. So bringing together the data pieces that are related to a decision is something we do actively with our large customers. It goes back to your point: the agent who does that piece of work as an assistant is coming sooner than later.
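To make the in-flow validation Brian describes concrete, here is a minimal sketch under assumed field names and a hypothetical CRM client. It shows the shape of the check (stated income versus the sum of supporting documents, plus an enrichment lookup), not any vendor’s actual API.

```python
# Minimal sketch of cross-document validation and enrichment.
# Field names, tolerances, and the CRM client are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class ExtractedPacket:
    applicant_id: str
    stated_income: float              # income declared on the application form
    supporting_incomes: list[float]   # figures pulled from W-2s, pay stubs, etc.

def validate_income(packet: ExtractedPacket, tolerance: float = 0.02) -> dict:
    """Check that the stated income matches the sum of supporting documents."""
    supported_total = sum(packet.supporting_incomes)
    delta = abs(packet.stated_income - supported_total)
    within_tolerance = delta <= tolerance * max(packet.stated_income, 1.0)
    return {
        "stated": packet.stated_income,
        "supported_total": supported_total,
        "within_tolerance": within_tolerance,
    }

def enrich_with_customer_record(packet: ExtractedPacket, crm) -> dict:
    """Pull customer status and credit score from an existing system of record."""
    record = crm.lookup(packet.applicant_id)  # hypothetical CRM client
    return {
        "is_current_customer": record is not None,
        "credit_score": record.credit_score if record else None,
    }
```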
Dana Hernandez: That’s the perfect example because you’ve got the solution pulling together all these key data elements that would take a human days, weeks to do. And you’re serving it up for them to go, “Huh, let me look at this.” And if they need more data, they can go get that data as to how these elements were pulled together. But ultimately, then you have that human making that final decision. But all the data was served up more efficiently, validated, checked much faster, probably much higher quality than a human trying to go through pages and pages of information.
Brian Weiss: We are having great success inserting agents that read and summarize; basically it’s a research assistant. Taking an entire mortgage packet, chopping it up, dropping it down into a RAG architecture, and then, if you’re the underwriter, being able to ask questions about that document like, “What’s the story here? What’s going on?” Or police reports: you chunk that up, you drop that in, and as you’re processing the data, you’re creating this underlying repository of business data that you can start asking questions of, like, “What happened on Fifth and Main at five o’clock?” That ability to seed a business-specific data lake that drives AI, that drives a decision-making research assistant, is something we’re actively doing.
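The chunk-and-retrieve pattern behind that research assistant looks roughly like the sketch below. It deliberately uses a toy bag-of-words similarity so the example is self-contained; a real deployment would use an actual embedding model, a vector store, and a language model to answer over the retrieved chunks.

```python
# Minimal, self-contained sketch of "chop up a packet, index it, retrieve context".
# The embed() function is a toy stand-in, not a real embedding model.

import math
from collections import Counter

def chunk(text: str, size: int = 500) -> list[str]:
    """Split a long document into roughly fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' so the sketch runs without dependencies."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class PacketIndex:
    """In-memory index over all chunks of a document packet."""
    def __init__(self, documents: dict[str, str]):
        self.chunks = [(name, c, embed(c))
                       for name, doc in documents.items()
                       for c in chunk(doc)]

    def retrieve(self, question: str, k: int = 3) -> list[tuple[str, str]]:
        q = embed(question)
        ranked = sorted(self.chunks, key=lambda x: cosine(q, x[2]), reverse=True)
        return [(name, text) for name, text, _ in ranked[:k]]

# Usage: build the index as documents are processed, then pass the chunks
# returned by retrieve("what's the story here?") to a language model as context.
```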
Brian Weiss: Here’s a great question for you, Dana. Originally IDP lived down in the mail room, and it was “I gotta get data off of this picture of a piece of paper in the most efficient way possible and move it into some database.” Increasingly, it’s about “how do I leverage AI to automate an entire process,” and they’re actually different interests inside of an organization sometimes. I always know which one I’m dealing with when they ask me, “Well, I wanna just get all this data and put it into a RAG and then start asking questions.” That’s usually the AI person who’s been told to go figure out why IDP is a great use case, as opposed to the IDP person who’s trying to cut their costs in half by making more efficient models to do the work. Do you see the intersection happening?
Dana Hernandez: I think the intersection is coming. I think historically, people looked at AI and IDP in particular as getting the data into the system and doing it faster and more accurately than a person who would’ve taken a lot longer to get that same data into the system, the old key punch. I think that was kinda the entry level. This is how you got people interested in it. Now that people have some of that data, I think others are getting more creative with “How can I use this data? What is it for? Why are we doing this? Well, wait, it looks like this person qualifies for this loan, but I see some weird anomalies in these pieces of data that I just asked some questions on.” So then I can go ask more questions and get more information. I think that’s kinda combining the ability to research and get that data fast with the creative human mind asking the questions and trying to understand what’s happening. Is it a cut-and-dried case, or is there something unusual going on here?
Brian Weiss: At Hyperscience, we’re moving with our customers well beyond just getting data and pushing it into a database. We’re in the “What does it mean, and what is the system of work that I’m trying to automate?” So say an investigator is trying to figure out if there’s a problem in the documents they’re looking at about financial transactions, ’cause they’re looking for fraud. There’s a huge box of financial transactions. Think about the complexity of that box-of-docs problem. I’m not gonna just be the IDP thing that puts the data in. I’m actually gonna start to help you understand whether there’s a problem or not. Or the Veterans Administration health claims: a lot of handwriting, a lot of variability in the documents. For our partners who service that organization, it’s about a billion pages a year. They’re using Hyperscience and sort of that meta AI decision-making to not only make the in-good-order/not-in-good-order call, but also look for the weird stuff, like “Did somebody scribble in the margins?” As opposed to, “Hey, stick it in the exception queue,” we actually wanna read it, figure out what it means, understand if we can make sense of that application, and make the decision: is the veteran eligible? We’re moving into that decision-making mode with a lot of our large customers. But I do think it’ll be rare for that decision to get automated, definitely not by the end of 2026. You’re already seeing insurance companies getting sued by people who were denied a claim because, “Hey, you just used a bot and you didn’t actually do it right to make the call.” This is mission-critical data; these use cases will keep increasing, but probably at a measured pace for true decision-making.
Brian Weiss: Decision-making integration within the system and between humans and machines is the key to ROI. Here’s an interesting piece of work from McKinsey. It asks which organizations see the largest return from AI, and of the ones that do, what best practices they follow. At the top of that list is human-in-the-loop. And the ones who care about this, prioritize it, and engineer it are winning disproportionately. What do you think is going on there?
Dana Hernandez: I was reading something a few days ago that was talking about how much the human brain can absorb from a data perspective, and that most enterprise organizations create more data in one day than the brain will be able to absorb in a lifetime. We have so much data that needs to be processed, and that’s the perfect thing for AI. That’s what it is. That’s what it does. That’s what it can interpret. But that key element of the piece where the brain is different than AI is in the ability to strategize and theorize and put that whole element around the data that was served up and the interpretations of that data, and the questioning of that data and say, “Huh, is this an anomaly or not? Was there something wrong in the data, or is it real?” That connection or that integration between AI doing what it does best and the human brain and what it does best, I think is really where the value is gonna come from.
Brian Weiss: I agree with that. And we certainly at Hyperscience invest heavily in that efficient engagement between machine and person. My first reaction when I saw this stat was: is it just because these are the people who are paranoid about models being wrong? Are they reacting to the downside of unbridled AI, which is that you don’t know when it’s wrong, it won’t tell you, and you don’t have a way to control it, so you’d better put a human around it? What I actually see with the companies we work with who are really accelerating is more an embrace of the concept that humans are gonna be in there, and they should be in there. And you should design the most efficient system for machines and humans to work together. It’s not just “I need a safety net, go figure it out.” No, design it in. At some point in an ensemble of models, if you’re pulling out long-form information and key-value pairs, and you’re chunking the data up and asking what it means with respect to other documents in the packet, there’s a thresholding process in that. At Hyperscience, the idea is that if one of the models doing that work can’t meet its target, it’ll ask for help. And it might ask for help from another model. Now you’ve got consensus: you have a model or two, you might ask two or three, and now it’s getting expensive. I’ve just burned a bunch of tokens asking the same question of a bunch of models. But at a certain point, I can call a person in and have them help me figure out the answer to the questions I’m asking.
Brian Weiss: What’s really important about that, almost as a matter of technical philosophy, is that the benefit is capturing the feedback. One-sided systems just give you a value and a confidence score. All that does is leave it up to you to go figure out, well, when the confidence was 65, were you actually right? Now you’re left with, “Great. Now what?” So at Hyperscience, we’re diligent about taking accountability for when it might be wrong and doing something about it in the system, which means call another model, or call in a person at the end of the line for either transcription or finding a piece of data. “Can’t tell between an A and a U on a really scribbly piece of handwriting. Help me solve that.” And then being able to do that for what the business cares about. Not every field’s created equal. Maybe the A and the U doesn’t matter to me. But at the end of the day, if you ask the person what they thought and they either agreed with your idea or disagreed and changed it, that piece of information should make the process smarter. I’ve given you direct feedback: “You messed up. That’s not the right answer. The right answer is X.” Is that integrated into your process? Can the models and the processes you use absorb that feedback in a mature way so the next time they see it, they’re a little bit smarter? That ecosystem of humans and machines, with a feedback loop running in a very efficient way, proves to be very powerful. I’m not surprised to see that the companies who are taking that seriously are way out at the edge. They’re winning ahead of all others.
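The escalate-then-learn loop Brian describes across these two points can be sketched roughly as below. Everything here is an assumed interface for illustration: cascade through models, accept agreement between two of them as consensus, and otherwise route the field to a person whose correction is captured as training signal.

```python
# Minimal sketch of confidence thresholding, consensus, human escalation,
# and feedback capture. Model and review interfaces are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    value: str
    confidence: float  # 0.0 - 1.0

def extract_field(
    field_name: str,
    document: bytes,
    models: list[Callable[[bytes, str], Prediction]],
    human_review: Callable[[bytes, str], str],
    threshold: float = 0.95,
) -> tuple[str, str]:
    """Cascade through models; escalate to a person if none is confident enough."""
    answers: list[Prediction] = []
    for model in models:
        pred = model(document, field_name)
        answers.append(pred)
        if pred.confidence >= threshold:
            return pred.value, "model"
        # Consensus: two independent models agreeing is treated as good enough.
        if len(answers) >= 2 and answers[-1].value == answers[-2].value:
            return pred.value, "consensus"

    # No model met the bar: ask a person, and keep the answer as feedback.
    corrected = human_review(document, field_name)
    record_feedback(field_name, [a.value for a in answers], corrected)
    return corrected, "human"

def record_feedback(field_name: str, model_answers: list[str], corrected: str) -> None:
    """Store the human correction so a later training cycle can learn from it."""
    # In practice this would write to a labeled-data store used for retraining.
    print(f"feedback on {field_name}: models said {model_answers}, human said {corrected!r}")
```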
Dana Hernandez: I think there are a lot of people that are taking the easy answer that comes from the solution. But I think it’s a knowledge thing. It’s a learning curve thing. They need to learn that that model may not give them the exact answer that they are really looking for, even though it seems like it’s giving the right answer.
Brian Weiss: And what do you do if it doesn’t, and would you know if it didn’t give you the right answer?
Dana Hernandez: I think there are times you don’t know. And then sometimes the human says, “Wait, I thought I saw a different answer on that.” That’s one of my favorite things to do personally, is to ask two different AIs the same question and pit them against each other.
Brian Weiss: We do the consensus shootout. I can stack multiple models on the same task and ask for an opinion. And then of course you update the model and the opinion changes. Or you say it slightly differently. And so you end up in this prompt-engineering hell, this loop of, “Well, good point, Dana. If I look at it from that perspective…” and I’m like, “No, I asked you the question first. I just want the right answer. I don’t wanna tell you the answer from another AI model.” You’re trying to treat a probability calculator for words like a database, where you can ask a very deterministic question and have it go get you the answer. I think this is a pretty interesting piece to think about.
Dana Hernandez: I think it’s a super interesting idea and topic, but I think ultimately the long term is like anything, use the right person or the right machine for the right skill set for the right job. And I think ultimately there’s a human side of things that AI can’t, at least in the foreseeable future, do, and I think those two put together, each doing what they do best, is where the success will be.
Brian Weiss: I would only add that we see the most success with not just a tool, but tools, plural. So we’ll just cascade through multiple models. And by the way, if none of those models succeeds in giving me the right answer and I go look at it, it’s probably unreadable. Like, none of us can figure it out. We see a trend now, which is that people were super hyped about the easy button, and they’ve tried it, and then they’re like, “Ooh, it’s not that easy.” So we’re seeing a lot of the downside of the hype cycle: “Oh, I realize this is a little more complicated than one magic model that I give my data and move on.” And really looking at an accuracy harness, potentially a human in the loop, how to get feedback for improvement, and control, governance, and transparency all become important. We’re seeing the pendulum swing.
Brian Weiss: Business outcomes will be the critical determinant of success. That kind of seems like motherhood and apple pie, to say that outcomes are gonna be the critical determinant, but in the context of our industry, Dana, as AI moves from experimentation into production, how do you see enterprises defining success? There’s been a generation of “Yay, experiment.” Okay, but you just spent a lot of money on an AI project, and I’m supposed to see ROI. How do you see that playing out, specifically in 2026?
Dana Hernandez: I think as the hype wears off and the money to just keep playing around goes away, it definitely has to be tied to some sort of a business outcome, ROI, benefit. The only reason we do IT is to support some sort of a business outcome, and hopefully it’s a successful one. Whether it’s supporting employees to do a better job or supporting customers to get what they want, ultimately it’s got to be tied to some sort of a business outcome, or we won’t be doing it for much longer. I think a lot of the success originally was on the generative AI side: search and retrieval, pulling it together, research assistants, chatbots. It’s moving more into the predictive side of things, and I think it’s about pulling all that data together, plus the human-in-the-loop piece, but it’s got to be where it provides value. If it’s not providing value, they are gonna lose the appetite to spend the money on AI quickly.
Brian Weiss: Do you see anything to distinguish organizations that are successfully operationalizing it now versus those who might be stuck in pilot mode? Do you have a measurement for a maturity curve of where people are?
Dana Hernandez: For me, it’s really if they’re tying it into the business outcomes and that they have a real business reason for what they’re doing. If I was a company like Hyperscience, I would be doing a lot more experimentation based on what my customers are potentially talking about, but maybe not pursuing because I ultimately would have the goal to support something that the customer doesn’t even know they want. But as a business in the world that’s gonna use IT to do some sort of function, whether I’m a retail organization or an airline trying to get passengers from point A to point B, my AI needs to tie into a process or procedure that ultimately has a business outcome that changes the way my company does business, whether it creates an ROI, maybe I’m more efficient. But I think it goes beyond that into how do I provide value for my employees and customers?
Brian Weiss: There’s definitely a maturity curve, and our conversations are shifting dramatically now away from “Hey, I just need you to make it more accurate because it makes things cheaper for me” to “What is the system of work that we are solving for, that has people in it, that could be more efficient and/or faster and/or even just better?” The ones that are succeeding are actually tying it to “if I do this, the impact to the business is the following.” You can prove the ROI, whether it’s cost takeout or whether it’s shortening your time to billing and giving you more free cash as a result of processing things quicker and faster. People are understanding how to calculate that now, and it’s part of why the IDP industry, to me, is the killer app for AI. Who would’ve thought that modernizing the mailroom is now really the eye of the hurricane for genuine AI-driven ROI? We’ve been preaching it for about a year now, but we see that.
Brian Weiss: I’m not surprised that this list includes insurance, industrial goods, and transportation and logistics: notoriously analog worlds where we’re now able to solve hard problems. Because that surface area with the public is never going away. There are entire systems of work that are built around forms and handwriting. You’re never gonna get around it. That surface area is complicated, and it’s gonna get more complicated. So we see it for those who are asking, “How can I use this AI stuff to accelerate parts of this in a mature way?” And it’s also governed. We have great examples in banking and insurance for claims processing and claims optimization. We see folks bringing in things like, “Let’s not only read what’s on the page, but I wanna look at this picture of the broken-up car, and tell me what you know about it ahead of time, as well as watch the video of the crash and tell me what I need to know ahead of time.” I’m still in a research-assistant mode, but we’re moving in that direction very quickly, to be able to quickly give you an insight as to whether this is in good order or not.
Brian Weiss: Takeaways, Dana. Moving decisions closer to the data. I think there is an inevitable collapsing of the separation of “this thing reads stuff off a page, and then other bots go and move data around, and then decisions get made somewhere else.” There are efficiencies in architectures that embed the decision-making in the data movement, because you’re still close to the documents. Our customers who need to make a decision about a claim sometimes need to go look at page 50 of the document, or you’ve got a banking stress test that you’re trying to fill out which is 65 pages deep, and you need to be able to make the decisions. Bring data to the documents. Don’t just put data in a database and then say, “Here’s a ZIP file full of the supporting documents.” You’ve kind of just kicked the can down the road. So we’re seeing that a lot, and it sounds like you’re experiencing the same sort of thing.
Dana Hernandez: I’m seeing that as well, and I think it feels like the last 10 to 15 years have been about data. And with AI, initially, people were like, “Oh, it’ll fix the data problem.” I think it’s just made the data problem more critical and more of an issue. So we can’t get away from it: data is the issue. And the more accurate that data is, and the closer you are to that accurate data, I think, is key.
Brian Weiss: The next one we just touched on, which is operationalizing governance, and that includes the idea that human in the loop isn’t a fallback. It’s actually a design principle. And a really efficient way to get machines, plural, and models, plural, and agents, plural, to operate with people is, I think, where in 2026 we’ll see that principle played out. It gives you auditability, it gives you transparency in the decision. You can control your destiny. You can decide what data goes into a RAG to be able to power a bot someday. I imagine you’re probably seeing the same sort of flavor to it.
Dana Hernandez: Definitely seeing the governance, definitely seeing the need for the guardrails and the auditability, the end-to-end traceability becoming even more critical. And the pairing up of the human and the AI each doing what they do best inside this governance model to make it happen.
Brian Weiss: The last one: multi-model orchestration. There is the hype that, hey, there’s a magic model, an easy button that does everything. Actually, broad generalist models are not good at everything, and they’re kind of expensive to use sometimes. I don’t need one to tell me the square root of 54; I have a calculator to do that work. Use a multi-model approach, a multi-agent approach, potentially an accuracy harness, and also maybe consensus, to drive the best-quality answer for the least amount of money. I still gotta pay somebody to do all that compute. So that multi-model orchestration, I think, will be a hallmark of 2026. You’re already seeing it. You’re already recommending things just like this.
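One way to picture cost-aware orchestration of the kind Brian is describing is the sketch below: try the cheapest, most targeted model first and escalate to the broad, expensive one only when the cheap one isn’t confident. The costs and model interfaces are illustrative assumptions, not real pricing or a specific product.

```python
# Minimal sketch of cost-aware multi-model orchestration.
# Costs are relative illustrations; model interfaces are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelSpec:
    name: str
    run: Callable[[str], tuple[str, float]]  # returns (answer, confidence)
    cost_per_call: float                     # illustrative relative cost

def orchestrate(task: str, models: list[ModelSpec], threshold: float = 0.9) -> dict:
    """Walk the ensemble from cheapest to most expensive until one is confident."""
    total_cost = 0.0
    for spec in sorted(models, key=lambda m: m.cost_per_call):
        answer, confidence = spec.run(task)
        total_cost += spec.cost_per_call
        if confidence >= threshold:
            return {"answer": answer, "model": spec.name, "cost": total_cost}
    # Nothing was confident enough: flag for review rather than guessing.
    return {"answer": None, "model": None, "cost": total_cost, "needs_review": True}
```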
Dana Hernandez: I’m definitely seeing it in very industry-specific ways, and/or different capabilities. I think it’s the way the world is gonna continue to shift, because those smaller models or those topic-specific models can almost give you a better answer faster with less hallucination, with less retraining, and putting two or three of those together to go through an entire complex process makes all the sense in the world.
Brian Weiss: Well, and it’s cheaper. With my GPU bill and my tokens, it’s actually more cost effective to get a very tidy answer from an ensemble, especially if you use an accuracy harness. If each of those models takes accountability for when it might be wrong, and when it’s wrong you can create a learning cycle, you have that kind of virtuous feedback loop built in, and then it becomes really powerful. Flexibility is gonna determine scalability. The people who win will figure this out and lean in as a long-term strategy, not just a short-term impact. I think we’re coming up on time, Dana.
Dana Hernandez: That was fast.
Brian Weiss: It was fast, I know. It went really, really fast. These have been great questions. I wanted to say thanks. Any last thoughts? You wanna prognosticate something about next year that we haven’t covered yet?
Dana Hernandez: I think it’ll be interesting doing the next report, because several of the things that you brought up are emerging so fast or changing so rapidly that it’s definitely gonna impact the next version of this report and the industry as a whole. A lot of the governance, the trust-but-verify, the human piece of it, the machine pieces of it: I think they’re all important in the coming year.
Brian Weiss: I really appreciate your insights and working with you, Dana, and thank you for taking the time today to run through these topics. Looking forward to the next report and whatever piece of work you put out next. Thanks to everybody who joined Dana and me for this conversation about evolving trends in IDP in 2026.