Accelerating Hyperautomation with Integrated Solutions from AWS and Hyperscience
Amidst rising inflation, enterprises face immense pressure to cut costs, streamline operations, and enhance efficiency. Traditional automation solutions often fall short of meeting modern workflow demands.
Join industry experts Sai Kotagiri, Partner Solution Architect at AWS, and Priya Chakravarthi, Director of Product at Hyperscience, as they explore the transformative capabilities of the Hyperscience Hypercell and its seamless integration with AWS technologies.
In this session, you’ll discover:
- Common document processing automation challenges and how the Hyperscience Hypercell and AWS technologies can work together to solve them.
- How to combine Hyperscience’s proprietary AI models with industry-leading accuracy, running on AWS infrastructure, with LLMs hosted on Amazon Bedrock.
- A live demo showcasing a workflow for the commercial insurance document intake process in action in the Hyperscience Hypercell.
Watch the on-demand webinar to learn how you can consolidate software investments and navigate complex hyperautomation challenges with solutions from AWS and Hyperscience.
Mark Aylor: Good morning, good afternoon, and good evening. My name’s Mark Aylor, and I’m responsible for global sales here at Hyperscience. Thank you for joining us today as we discuss accelerating hyperautomation with integrated solutions from AWS and Hyperscience.
A few housekeeping notes. You can participate in the Q&A by submitting your questions using the Q&A widget. We will address as many questions as possible at the end of the session. Check the related content widget on your console for additional information. A recording of this session will be sent to all registrants, and if you experience any media player issues or other technical problems, please visit our webcast help guide by clicking on the help widget below. At the bottom, you can see the console menu of the different widgets as we get started.
Amidst rising inflation, enterprises find themselves under immense pressure to cut costs, streamline operations, and enhance efficiency. However, traditional automation solutions often fall short in meeting the demands of modern workflows.
Join industry experts Sai, Partner Solution Architect at AWS, and Priya, Director of Product at Hyperscience, as they delve into the transformative capabilities of the Hyperscience Hypercell and its seamless integration with Amazon S3, Anthropic Claude 3 hosted on Amazon Bedrock, Knowledge Bases for Amazon Bedrock, and other AWS technologies.
Discover how Hyperscience’s Hypercell, an enterprise AI infrastructure software platform, coupled with AWS integration, empowers organizations to consolidate software investments and navigate complex hyperautomation challenges, optimizing complex workflows effortlessly.
Common document processing automation challenges, and how the Hyperscience Hypercell and AWS technologies can work together to solve them. How to combine Hyperscience’s proprietary AI models with industry-leading accuracy, running on AWS infrastructure, with LLMs hosted on Amazon Bedrock. And then also a live demo showcasing a workflow for the commercial insurance document intake process in action in the Hyperscience Hypercell. This demo will show the seamless integration of Hyperscience’s Hypercell, deployed on AWS Cloud, with Amazon S3, AWS IAM, Claude 3 running on Amazon Bedrock, and Knowledge Bases for Amazon Bedrock. These are a few of the topics that will be covered in today’s webinar.
I’ll turn it over for introductions. Let’s start with Sai.
Sai Kotagiri: Hey, good morning everyone. My name is Sai Kotagiri. I’m a Partner Solutions Architect with AWS. So what I do mostly is partner with ISVs like Hyperscience so we can innovate together and bring a better product to you, that is the customers, right? So we try to solve most industry use cases and we have our principles as to how we achieve this. So having said that, today I’m as eager as you to understand from Priya how our collaboration worked together to build a solution which helps your organizations in hyperautomating your document needs. So having said that, hi everyone, and Priya, over to you.
Priya Chakravarthi: Excellent, thanks, Sai. I’m Priya Chakravarthi, the Product Director at Hyperscience and responsible for the AI and ML models, their life cycles, and the way that they’re managed within the application. So Sai here, as he explained, is my partner in crime from AWS and we’ve been working together on tighter integration between our products, and this webinar is a consequence of that collaboration. And in this spirit of diving right in, let’s speak about what we’re gonna cover in this session.
In this session, we’ll cover how Hyperscience and AWS technologies can work together to hyperautomate document processing workflows. We’ll first begin with a short introduction of the company, just to level-set with anyone here who may be new to Hyperscience. We’ll then define what hyperautomation is, particularly Hyperscience’s definition of hyperautomation, which is measurable and concrete.
Then we’ll talk about what the Hyperscience Hypercell is and the model-centric approach that we bring, the architecture the Hyperscience Hypercell represents that enables hyperautomation—lots of “hyper” in that sentence. Then I’ll pass the baton to Sai, who will discuss common customer challenges in document processing and the AWS technologies available to rein them in. And then, putting this all together, we will look at one specific example of challenges faced in the insurance vertical, particularly in automating the document intake process, and how we would automate this workflow end-to-end in the Hyperscience Hypercell using a combination of both Hyperscience proprietary technology and AWS technologies.
Hyperscience as a company was founded 10 years ago, and our mission was to help humans do their best work by deploying AI to do manual and repetitive tasks. The company was focused on bringing AI into the enterprise by solving complex problems and processing challenges at a very high level of accuracy. The company grew steadily and attracted investments from top tier investment funds. Over the years, the company has built an impressive partner ecosystem that helps us go to market at scale and is now part of the automation pipeline of several blue chip companies in regulated industries.
Hyperscience as a company is fueling change in the automation landscape by bringing an ML-centric approach to enterprise AI. Traditional IDP and RPA failed due to narrowly defined rules and rigid templates, which often resulted in poor accuracy and eroded customer confidence. You could say that’s why BPO exists today: to fix the errors or manually handle the more complex use cases that these technologies could not handle. Hyperscience, on the other hand, was built with AI from the ground up, with proprietary ML models that learn through production use cases and human interaction and adapt to real-time changes in data. We are able to achieve an unprecedented level of accuracy, with real customer proof points. Because of this continuous learning of our models, we are also much cheaper to maintain over time, translating to lower TCO and higher ROI for our customers.
What is hyperautomation? Hyperscience’s definition of hyperautomation is human-level accuracy and automation. In fact, the core value proposition of Hyperscience is orchestrating and elegantly combining human supervision, quality assurance, and fine-tuning to, one, accurately measure accuracy; two, control accuracy over time; and three, continuously improve and drive higher automation. That’s how we are able to stand behind the numbers you see before you on the screen. It’s also why we are trusted and deployed in highly regulated industries. You can see some logos on the screen there… we kind of ran out of real estate on the previous slide, but here is a list of some other companies in highly regulated industries where Hyperscience is deployed as part of the pipeline.
Hyperscience is able to achieve hyperautomation through a model-centric architecture. Now, enterprise content—that’s the list on the left, which is the heartbeat of any enterprise—passes through our proprietary models. And when I say models, I’m talking about our proprietary pre-trained models—we have more than 40 of them, trained for tasks like deskewing, image rotation, extraction, et cetera—as well as models that customers can ground with their own data to perform a certain task. And with these models, they can set the expected accuracy as input, and their automation is the output.
The entire workflow is created and orchestrated on Hyperscience’s workflow engine, which we call Flows, and it results in accurate enterprise data being fed into downstream systems. The value proposition of the product goes beyond just extraction, though. If you think of these models as digital workers, they extract, QA and supervise, and curate data. You can also ask these digital workers to make decisions based on your historic data without having to punt this activity downstream. I’m going to show you an example later on that demonstrates this use case, so stay tuned. You can do this by incorporating a RAG-based, or retrieval augmented generation-based, architecture that uses this high-fidelity enterprise data to seed your LLMs.
Priya Chakravarthi: Hyperscience’s core product is called the Hyperscience Hypercell. The Hypercell is a turnkey AI software platform that applies the power of AI and ML to all of the documents and information assets that flow through your organization. To double click there: the Hypercell is, at its core, turnkey AI software infrastructure. The Hyperscience Hypercell includes document processing—which is our bread and butter—as well as validation, data enrichment, workflow orchestration, and decisioning, built on the proprietary model-based architecture I just described on the previous slide.
The Hypercell reads and understands all forms of content fluently, delivering industry-leading accuracy of 99.5% and automation rates of 98%. The Hypercell can scale across the enterprise and harness the power of LLMs with auditability, governance, and security at its core, using your organization’s private enterprise data. Now, I’ve explained what the Hypercell is here, and the key phrase is turnkey AI software infrastructure. I won’t go through the whole reference architecture just now, but I promise to do so sometime in the next 30 minutes. I just want to leave you with the thought that this AI software infrastructure, the Hyperscience Hypercell, is what makes Hyperscience turnkey.
So I’ve introduced a lot of concepts in the last couple of slides, and you’ve gotten a primer on the Hyperscience Hypercell, on what Hyperscience is as a company, and on some of its history. Let’s turn to Sai now for the AWS customer perspective. Sai, if you don’t mind, can you shed some light on common industries and use cases looking for hyperautomation that you see in your day-to-day work with AWS customers? We also wanna hear about their challenges and how AWS is helping these customers.
Sai Kotagiri: Sure. Firstly, thanks for that introduction to hyperautomation and Hyperscience. So hyperautomation can be used in a wide variety of industries, right? It usually encompasses repetitive tasks, from extracting data, as you earlier pointed out, to financial analysis. And in some cases hyperautomation is also used for complex tasks like decision making: once you retrieve data from documents, you’ve got to apply some business knowledge to analyze it.
So, hyperautomation again, because it’s so widely and broadly applicable, in recent years we’ve seen it being adopted by more and more companies globally. And here today I want to talk about a few industries. You see a lot of industries that are currently using hyperautomation, but key among them are financial organizations, where documents such as loan and credit applications must be examined with a higher degree of accuracy. Why? Because they are used for making key decisions. And sometimes when making these key decisions, corporations resort to manual review to pull out sensitive information like mortgage rates, credit scores, et cetera.
And the other industry where we see a lot of automation happening is healthcare. Healthcare and life sciences organizations have an ever-growing need to search and analyze data from various forms, and this extracted data is very crucial in clinical research and patient diagnosis. This automation can then lead to a higher degree of effective patient treatment. These are all use cases that have become very essential, and automation is playing a key role.
The third one, which we’ll be talking more about, is the insurance industry, right? It relies on standardized forms for insurance submission, intake, and underwriting. And again, there is a variety of documents. Priya, I’m sure you’ll walk us through some of these forms and the automation applied in insurance.
And of course, as most of you probably know, legal companies pay lawyers by the hour. And sometimes lawyers spend a lot of time analyzing documents, extracting information from existing complex documents, and comparing data. Now instead, if automation can take over some of this data retrieval and analysis, then they can spend their time really helping people with their deep legal knowledge. Additionally, automation is a cost saver as well as a productivity enhancer for real estate management, education, accounting, and many, many more such industries.
That having been said, I want to touch upon some of the document processing use cases, so that when I say a lot of industries are benefiting, you can see how they’re benefiting: they’re reducing manual data entry, they’re accelerating document workflows, they’re improving data accuracy, and they’re driving process efficiencies across industries. Now, out of all the many use cases that hyperautomation can solve in healthcare, which I spoke about earlier, in manufacturing, retail, and other industries, today we’ll be focusing on document processing use cases in the insurance industry. Now, Priya, I’ve seen a little bit of your slides; you’re going to walk us through them and show us the package of documents that insurance companies would need to process. As I understand it, that’s a wide variety of documents with tons of information buried in them, and extracting it is essential. That’s a use case which is quite common in the insurance industry.
One of the things automation is being used for is claims processing, which often involves filling out paperwork that includes evidence of the coverage loss and submitting it to the insurance company. Once the insurance company receives that claim document or all that paperwork, it starts investigating the legitimacy of the claim. Okay? So we want automation to help them make critical decisions and not waste their time on menial work like extracting data and making sense of it. That’s one use case where, in the insurance industry, we see a lot of automation happening.
Similarly, underwriting. Underwriting, for some of you who don’t know, is the process used to determine the risk of insuring a person or a business. This involves insurance companies determining whether an entity poses an acceptable risk. Now, how do they come to the conclusion that an insurance policy can be underwritten? They have to analyze numerous documents, and then they have to calculate the appropriate coverage. So automation is definitely helping when it comes to underwriting of policies.
Fraud detection is another critical document processing use case that we come across a lot. Some commonly committed acts of insurance fraud in auto, life insurance, homeowners, and workers’ comp are estimated to cause around 10% of losses for insurance companies. So we need an unprecedented level of accuracy when it comes to analysis and processing. And of course, the ultimate aim for all insurance companies is to start preventing fraud in order to pass those cost savings on to us, the customers. Again, these are some of the common document processing use cases in insurance that we are seeing in the industry.
Now, when insurance companies try to solve for all these use cases, there are definitely some challenges. For most organizations, the data inside documents remains unstructured. Companies receive documents in a wide variety of formats, and usually this data is unavailable for analysis and cannot generate the right business insights. Okay? One of the challenges we see most often is that extracting text manually is very time consuming, and of course prone to error as well. Most companies started using legacy optical character recognition, or OCR, and it did its job well for pristine documents, but when the quality of a document deteriorates, you don’t get the performance you’re looking for.
Similarly, one of the challenges of manually processing documents is that it doesn’t scale well, right? It introduces variability into processes since it’s so people-intensive. When you have a smaller number of people handling a larger volume of documents, your quality is of course going to be affected. Another important challenge we’ve seen is current rule-based systems, right? They’re very rigid, they’re not flexible, and they can’t scale to new documents coming in. As a result—again, you can probably attest to this—in insurance you have ACORD forms, supplemental forms, statements of verification, tons and tons of forms, right? So if a format changes, you’ve got to redefine your systems, and when you start redefining your systems, a lot of effort is needed. So these, pretty much, are some of the challenges we’re seeing in document processing in the industry, specifically insurance. And again, this is not unique to insurance; it’s across the board for multiple organizations. We see this as a common pattern and challenge.
Now, how is AWS helping our customers with all these challenges when it comes to document processing and automation? What we’ve noticed is that the biggest impact for our customers is the ability to get key information from documents into their decision-making systems. What eventually happens is that this speeds up the cycle, allowing them to serve more customers and redeploy people to higher value tasks. And we help in three different ways. We have our AI services for intelligent document processing, which more or less help our customers solve simpler document use cases. Then we have our Gen AI platform, which is very flexible and cost effective, and is currently used by customers to leverage foundation models and solve some of these use cases. We’ll have a chance to look at how Gen AI is being used in the demo that follows.
And most importantly, we’ve partnered with ISVs like Hyperscience, whose solutions can accelerate innovation on the cloud for our customers. Having said that, I want to talk a little bit about the Gen AI platform, because that is something that will keep coming up a lot in our discussion. AWS offers the most flexible and cost-effective platform, and we offer end-to-end capabilities with simplified tools for users. Later on, when we demonstrate the Hyperscience Hypercell, we will pause and talk a little bit about the Gen AI platform. We used Amazon Bedrock, which is AWS’s Gen AI platform, to validate data between documents or inside them. And using foundation models, we provide an interface for users to ask open-ended questions.
Now, customers can pick foundation models of their choice from leading commercial, open source, and third party models, and they can fine-tune them with their private data. This data can offer context about the company when questions are asked against a particular set of trained data. And of course, these AI solutions are integrated with most of our AWS services, so you as a customer don’t have to do any heavy lifting. Those are, broadly, the capabilities we offer our customers.
But today we want to specifically talk about Bedrock. Okay? So what is Bedrock? Bedrock is a fully managed service that makes a range of powerful foundation models from leading AI providers and AWS available through a scalable, reliable, and secure API. For most Gen AI applications, setup is very difficult, so instead AWS wants you, the customer, to get that power right away and start using the foundation models we make available through our Bedrock platform.
Additionally, Priya also mentioned Knowledge Bases for Amazon Bedrock. Using Knowledge Bases for Bedrock, you can securely connect your foundation models to your company data for fully managed Retrieval Augmented Generation (RAG), a technique by which foundation models cross-reference authoritative knowledge sources before generating a response. So in a nutshell, that’s Bedrock, which is our premier Gen AI solution. And of course, the third very important way we help our customers is through our partnership with ISVs like Hyperscience.
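To make this a bit more concrete, here is a minimal sketch of what calling Claude 3 through Bedrock looks like from code. The model ID is illustrative (check which models are enabled in your account), and the actual boto3 call is shown only as a comment because it requires AWS credentials; the helper just builds the Anthropic Messages request body that Bedrock's InvokeModel API expects.

```python
import json

# Illustrative model ID; check the Bedrock console for the IDs enabled in your account.
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build the Anthropic Messages API body that Bedrock's InvokeModel expects."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }
    return json.dumps(body)

# With AWS credentials configured, the call itself looks roughly like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.invoke_model(modelId=MODEL_ID,
#                              body=build_claude_request("Summarize this ACORD form: ..."))
#   answer = json.loads(resp["body"].read())["content"][0]["text"]
```

The same request shape works for any Anthropic model hosted on Bedrock; only the model ID changes.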
So AWS’s guiding principle has always been to put customers first, and we focus on partner solutions that can help customers solve specific use cases, like how to hyperautomate certain aspects of their everyday pipelines. We insist upon the highest standards and help our partners build industry-leading solutions. When we say industry-leading solutions, we are looking for scalable, secure, reliable, cost-optimized, and efficient solutions so you, the customer, can benefit. For that, we have the AWS Partner Network, a global community of partners that leverages AWS programs, expertise, and resources to build innovative solutions that solve technical challenges and deliver great customer value. Now, Hyperscience is our ISV expert partner, and their solutions are currently available on Marketplace. As part of this partnership, Hyperscience and AWS have been collaborating to deliver the Hypercell. Having said that, I’m going to pass it on to Priya to talk about the solution and how together we can accelerate hyperautomation in your organization.
Priya Chakravarthi: Thank you, Sai. I hope the audience remembers everything we’ve told you so far about the Hyperscience Hypercell and all the AWS technologies, because there’s going to be a quiz at the end of this. I’m just kidding. We’ve heard about Hyperscience and our core product, the Hyperscience Hypercell, and we’ve heard Sai talk about the AWS technologies and tools available for hyperautomation. Let’s look at the magic we can achieve together with these two technologies.
I wanna begin this section with a sobering thought and by calling to attention that deploying AI in the enterprise is a risky proposition. Bringing AI to the enterprise for a production use case is not simple. I mean, if you are doing some kind of a generative chat bot POC, yes, it could be simple, but production is a different story. Enterprises have concerns about the trustworthiness of the model, whether the model is accurate or not, or is it hallucinating? What are the costs associated with running a model? And what percentage of that end-to-end automation costs is represented by the model? And how do we understand what it has been trained on and that it won’t use the company’s private PII data for training purposes? That’s just to name a few concerns that enterprises have.
Enter the Hypercell. The Hypercell makes it easy to deploy, supervise, and QA your model, measure your end-to-end automation, and make your ROI from deploying AI in the enterprise easy to understand and measure. As promised earlier, I’m going to double click on the reference architecture of the Hyperscience Hypercell here.
Starting from the bottom, you have the enterprise content that we bring into the Hypercell using a variety of input connectors, like S3, email listeners, et cetera. There are more than a dozen connectors by which you can ingest documents or data into Hyperscience. Going up, we have pre-trained models, like I said, and ready-to-train models that can be built on your data. Over there in the middle where it says “Copilot Application,” these are part of the digital workers that train, supervise, QA, and measure the models, and they’re how you get to optimize your model to be high performing. We also allow you to bring in third party models or LLMs for specific tasks you may want to achieve on the platform. You can string all of these models and tasks together to represent your workflow as a pipeline in our Flow Studio, which orchestrates your entire workflow.
The output of this workflow, whether it’s a decision or just high-fidelity extracted data, is sent to a downstream system, or it can be used to ground your LLM with rich enterprise data, or used by your ML ops team to train their own model. All of these are possible when you use the Hyperscience Hypercell. When we combine the Hypercell’s specialized models, the digital workers that manage the lifecycle of these models, Generative AI tools and other bleeding edge infrastructure components that come from AWS, and built-in human supervision to inspect the output of these models and validate rules for decisioning within your workflow, what we essentially get is hyperautomation.
We can create Gen AI applications within the Hypercell for any part of a critical user journey. A few examples are applications where accurate data extracted by the Hypercell populates a data store, such as a vector database or any other database, powering knowledge search and other use cases that leverage the RAG technique to ground your LLMs. That’s one use case. Or you could have applications built with technology partners, as in the case of AWS here, where the Hypercell provides accurately extracted data for validation or decision making with the help of an LLM hosted by a partner.
Let’s look at one such example. We’ve been talking a lot about insurance document intake processing, so let’s look at how we would develop an application for that. The insurance industry relies on standardized forms for commercial insurance submission, intake, and underwriting. Insurance companies must process a variety of these documents, including ACORD forms, statements of verification, supplemental forms, and loss run reports, which are submitted through brokers. Automating this insurance intake process with Hyperscience and AWS technologies involves assembling input data—these could be from S3 buckets—using our in-house fine-tuned models for high-accuracy extraction, and using LLMs for validation and sanitization of data. Additionally, cross-checking the results against similar documents stored in a vector database is something you could easily plug in. The result is that you get industry-leading automation and accuracy when you stitch together a workflow using these technologies.
And this is what the solution looks like. You have Hyperscience extracting data, validating data, and providing decisioning through our supervision interfaces. And you have partner-based (in this case AWS) LLM-assisted sanitization and validation of this data, and similarity-based decisioning using RAG technologies. In this example, validation of data between the various documents submitted for processing is done using Claude 3 running on Amazon Bedrock. Hyperscience also provides an interface where a business user can ask open-ended questions about a document. And when you have questions like “how have previous cases like this been handled?”, “what are these documents?”, and “how do they compare with documents previously processed by this Hyperscience Hypercell?”, we’ve integrated AWS’s Knowledge Bases and incorporated RAG technology, from which the workflow enables automated decisioning. The embeddings model we’ve used for this demo is Titan Text Embeddings v2, and for the vector DB we are using the vector engine for Amazon OpenSearch Serverless.
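To make the Knowledge Bases piece concrete, here is a minimal sketch of the request shape Bedrock's RetrieveAndGenerate API takes. The knowledge base ID and model ARN are placeholders, not values from the demo, and the boto3 call is shown as a comment since it needs AWS credentials.

```python
# Placeholder identifiers; substitute your own Knowledge Base ID and model ARN.
KB_ID = "EXAMPLEKBID"
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

def build_rag_query(question: str) -> dict:
    """Build a RetrieveAndGenerate request: Bedrock embeds the question,
    retrieves the most similar chunks from the vector store (Titan Text
    Embeddings v2 plus OpenSearch Serverless in the demo), and passes
    them to the model as grounding context."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    }

# With boto3 and credentials in place, the call would look roughly like:
#   client = boto3.client("bedrock-agent-runtime")
#   resp = client.retrieve_and_generate(**build_rag_query("How were similar claims handled?"))
#   print(resp["output"]["text"])
```

Because retrieval and generation are bundled in one managed call, the workflow itself only has to supply the question and read back the grounded answer.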
And with that, let’s just dive right into the demo. In this demo you’ll see how Hyperscience models and turnkey features can be used together with AWS technologies to create a best-of-breed solution for commercial insurance intake processing. The demo highlights how Hyperscience orchestrates document processing using proprietary models, data validation and analysis powered by LLMs, and a retrieval augmented generation architecture that allows for validation across multiple historic and similar documents.
Insurance document intake processing involves processing a packet of documents. These could be ACORD forms, supplemental information, loss runs, et cetera, that are usually received by an insurance company as input to the underwriting process. These documents need to be sanitized and validated before being sent downstream. We are currently on the Hyperscience Hypercell application, which can be deployed on-prem or as a fully managed, highly available, SOC 2 Type 2 certified Software as a Service solution available on the AWS Marketplace. The deployment can be configured to use AWS resources, for instance Amazon RDS for databases, Amazon S3 for file storage, and Amazon SQS for document ingestion and outputting the extracted data, across multiple availability zones for high availability.
Let’s look at these insurance documents after they’ve been submitted to Hyperscience. For the sake of expediency, I’ve already processed some submissions before recording this demo. You see the ACORD forms here, which are structured documents. You see the loss runs here, which could be multiple pages long and may be used for generating a quote. Then you could have competitor information and supplemental information in the packet as well.
First, let’s look at the flow that processes these documents. For a business user, a Hyperscience Flow is the pipeline or workflow that represents your business process. To demonstrate the flexibility of Hyperscience, I’m gonna demonstrate not one but three flows. Firstly, this is the standard document processing flow, which shows the Hyperscience models that classify which documents are in the document set, identify the data you’re interested in extracting and find its location, and finally extract the data at the accuracy you provide as input. This basic flow can easily be customized to incorporate an LLM, thereby mixing and matching our high-accuracy proprietary models with LLMs to create a basic Gen AI application.
Now let’s look at the way you can create a workflow to represent the insurance document intake process. This flow is customized from the basic document processing flow. It ingests documents from an S3 bucket using a native S3 integration that allows you to set up an input folder and an output folder to store the output, and to configure your AWS credentials to access your bucket. Once data is received, the Hyperscience proprietary full page transcription and entity extraction models extract data from the documents and create a case. The flow extracts business rules from a CSV file and from our native knowledge store. These rules, along with the extracted text, are sent to Anthropic Claude 3 running on Amazon Bedrock. In addition, information about what each document represents is queried against historic data stored in Knowledge Bases for Amazon Bedrock. The output of all of these validations is presented to the business user in a custom supervision interface that allows them to make a decision on a case right there. In this custom supervision interface, you can ask any open-ended query or question about the documents in the case, or about historic documents stored as embeddings in a vector database, demonstrating how a RAG-based architecture can be plugged in easily.
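As an illustration of the input-folder setup just described, here is a small sketch of how a poller might pick new documents out of a configured S3 input prefix. The bucket and prefix names are made up for the example, and the listing call is shown in comments since it requires credentials; the filtering logic itself is plain Python.

```python
# Illustrative prefix mirroring the input-folder configuration described above.
INPUT_PREFIX = "insurance-intake/input/"

def select_new_documents(keys, processed):
    """Return S3 keys under the input prefix that have not been handled yet,
    skipping folder placeholder keys that end with '/'."""
    return [
        k for k in keys
        if k.startswith(INPUT_PREFIX) and not k.endswith("/") and k not in processed
    ]

# With boto3, listing the input folder looks roughly like:
#   s3 = boto3.client("s3")
#   page = s3.list_objects_v2(Bucket="my-intake-bucket", Prefix=INPUT_PREFIX)
#   keys = [obj["Key"] for obj in page.get("Contents", [])]
#   new_docs = select_new_documents(keys, already_processed)
```

Keeping an input and an output prefix separate, as the flow configuration does, makes it trivial to tell unprocessed submissions apart from finished results.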
This demo shows how Hyperscience can unlock enterprise data to build Gen AI applications on the Hyperscience Hypercell, and that brings us to the end of the demo. I hope you enjoyed it. Thank you for listening to Sai and myself and for watching the demo. Now let’s see if there are any questions from the audience.
Mark Aylor: Yes, excellent. Now first of all, thank you Sai and thank you Priya for that in-depth review. Really appreciate that. We do have a few questions that have come in. Let’s start with one for Sai more on the business side. Sai, there’s a question that says: “We have heard AWS offers AI services for document processing. So why do we need partner solutions?”
Sai Kotagiri: Yeah, good question. A great point. In the spirit of customer obsession, which AWS stands strong on, while AWS, as you rightly pointed out, offers AI services that solve some key aspects of document processing, we also want to maintain a level playing field where partners remain free to extend, enhance, and even compete with AWS products. This is healthy competition that we encourage when it’s in the best interest of the customer and partner products can differentiate themselves and offer better services. So yes, we are willing to partner with ISVs to bring a better product to our customers. Again, it comes back to customer obsession.
Mark Aylor: Excellent. Thanks Sai. Priya, here’s one for you. Something that’s generated a lot of discussion lately is LLMs. So, “Does Hyperscience have any native integration with language models?”
Priya Chakravarthi: That’s a good question. Yes. You saw the workflow engine with its processing blocks. We have a processing block that allows you to configure your OpenAI credentials and access all GPT models from within Hyperscience, including models like GPT-4 Turbo and GPT-4o. You can configure your prompts from within Hyperscience, securely store your OpenAI credentials in your Hyperscience block, and manage it all within Hyperscience.
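Conceptually, the kind of request such a processing block would assemble looks like the following. This is a minimal sketch, not the actual block implementation: the prompt, input text, and the helper function are hypothetical, and only the request shape follows the OpenAI Chat Completions API.

```python
def build_block_request(prompt_template, extracted_text, model="gpt-4o"):
    """Combine a configured prompt with extracted text into a chat request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": prompt_template},
            {"role": "user", "content": extracted_text},
        ],
    }

req = build_block_request(
    "Summarize the key fields in this insurance document.",
    "Policy number: PN-1234",
)

# The block would then send this with the stored credentials, e.g.:
# from openai import OpenAI
# client = OpenAI(api_key=...)  # credentials managed securely by the block
# resp = client.chat.completions.create(**req)
```

The design point is that the prompt and credentials live in the block’s configuration, so business users can adjust them without writing code.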
We can also install Llama 2, a 7-billion-parameter model, in your air-gapped, on-premises environment, and again, it’s accessible via a native processing block. If you’re interested in that integration, speak to your sales representative and we can show you how it’s done. We also support publicly hosted LLMs, like Anthropic Claude 3, as you just saw in the demo, and Mistral Large, for example. These are available via API integration through custom code blocks, and you just saw one such example in the demo I showed.
Mark Aylor: Excellent. Thanks Priya. And as a reminder for those that are still with us, if you do have a question, please enter it in the Q&A chat. Here’s one back to you [Sai]. “What are some of the factors you look for when picking a foundation model for solving automation use cases?”
Sai Kotagiri: Ah, another good question. I’d say multiple factors; it’s not just one factor that you look for in a foundation model. At the core of it, I’d say the computational resources available, including hardware and memory. That’s one aspect I look for. Then speed and latency. For a lot of Gen AI applications, speed and latency are critical, and as the application owner you have to look at how latency can affect your application. One question to ask is: “Can we tolerate longer processing times, yes or no?” If the answer is no, then I look for LLMs that can offer substantially better speed.
Then of course, as Priya mentioned, the RAG technique is becoming essential and very popular nowadays. So how easy is it to fine-tune your model? That’s another aspect I’d look at before choosing an LLM, or an FM in this case. Then there’s domain-specific adaptability. We spoke a lot about insurance document processing, so is your foundation model adaptable to a specific industry use case? And finally, I’d say it all boils down to how well the model is trained and the quality and availability of its training data. At a high level, these are the factors I’d consider when picking a foundation model.
Mark Aylor: Excellent. Thanks Sai. I really appreciate it. Okay, as we wrap up here today, everyone will get a copy of the recording, and for any questions we didn’t get to, we will follow up directly with you one-on-one. Again, thanks for your time and the interaction, and Priya and Sai, thank you for the information and for walking us through this. Have a good day, everyone. Thank you all. Bye bye.