
ABBYY Vantage Video – Integration with Microsoft Teams®

Watch our video to learn how you can integrate ABBYY Vantage with Microsoft Teams®.

Hello. Today I’d like to share with you a very cool integration we built to bring Microsoft Teams® into the ABBYY Vantage experience.

Now what I’ve done is I’ve uploaded a skill into our Try Any Skill page. And what I’m gonna do is trigger a Teams® notification that happens after the extraction of this document takes place. So this invoice will go into ABBYY Vantage, ABBYY Vantage will extract the details, and when it’s ready for a human to be in the loop, which is optional, but there are reasonable cases for when a human is in the loop, we will trigger a bot notification, which could go to a single user or even a group of users. So once this document is done processing, you will see that I receive a Teams notification.
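The transcript doesn’t show how the bot notification is wired up, but one common mechanism for this kind of alert is a Teams incoming webhook that accepts a MessageCard payload. Below is a minimal Python sketch under that assumption; the webhook URL, document name, status, and field names are all illustrative, not part of the actual integration.

```python
import json
from urllib import request

def build_review_card(doc_name, status, fields):
    """Build a simple MessageCard payload for a Teams incoming webhook.
    Mirrors what the video shows: document name, status, and a few
    extracted fields for human reference. Field names are illustrative."""
    facts = [{"name": k, "value": str(v)} for k, v in fields.items()]
    return {
        "@type": "MessageCard",
        "@context": "https://schema.org/extensions",
        "summary": f"Document needs review: {doc_name}",
        "title": "Document needs review",
        "sections": [{
            "activityTitle": doc_name,
            "activitySubtitle": f"Status: {status}",
            "facts": facts,
        }],
    }

def notify_teams(webhook_url, card):
    """POST the card to the webhook URL (not invoked in this sketch)."""
    req = request.Request(
        webhook_url,
        data=json.dumps(card).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

card = build_review_card(
    "invoice-1041.pdf", "Manual Review",
    {"Vendor": "ACME Corp", "Invoice Total": "1,250.00"},
)
```

A webhook like this can notify a single channel; reaching individual users, or adding a working “Manual Review” button, would instead use a full Teams bot, which is beyond this sketch.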

Okay, so this document’s done. I just received a notification, and now if I open up my Teams window, you will see here that I have a document that needs review. I can see a status. I can see information about the document that was uploaded. And then, just for human informational purposes, I can see some of the fields that we extracted from the document.

Now at this point, I can also click the “Manual Review” button so that I can jump straight into Vantage to review that document, look at the data that I extracted, and make a decision on completing or rejecting that document. So this is a really cool integration that we can do to make sure humans are in the loop when we want them in the loop for an OCR process.

“Microsoft”, “Microsoft Defender”, “Microsoft Teams”, the Microsoft Teams Designs, “Office 365”, and “Teams” are trademarks of the Microsoft group of companies.

[Music- “‘Engineered to Perfection’ performed by Peter Nickalls, used under license from Shutterstock”.]

ABBYY Vantage – Assemble Activity Video

Watch our video to learn how to use the ABBYY Vantage – Assemble Activity to intelligently separate multi-page documents into individual documents and/or transactions.

Hello. Today I’d like to show you how we utilize the Assemble Activity within ABBYY Vantage. Now, the Assemble Activity gives us the ability to control and manage how we deal with multi-page transactions, documents with multiple pages in them, or a file that contains multiple documents, like what we see on the screen here. So what I have is a sample that shows six pages, but every page here is actually an independent document. These are just some sample direct deposit forms. And you can see here that every single page, even though there are a couple of different types, is its own independent document.

So what we’re gonna do is use the Assemble Activity to break apart this document the way that we expect it to be broken so that instead of six pages, I actually have six different transactions.

All right. So what you’ll see here is we of course have a process skill. And in our process skill we have our standard activities like inputs and outputs. For today’s demo, we’re gonna drop that sample into an FTP folder. So it’s pretty simple. We will then call the Assemble Activity. Within the Assemble Activity, we have settings here. Specifically in today’s demo, we’re gonna use classification. So we want the software to look page by page and determine what the document is, and we’re gonna tell the software, Hey, if you find one of these given direct deposit forms or document types, we want you to consider that the first page until you find another one.

So the reason why that’s important is because some documents aren’t always as simple as having a single page for every single document. So we wanna tell the software when to split the document by using this first-page checkbox. Once the software does that, we’re gonna go ahead and classify the document, and then based on that classification, we will extract from the given document type. So of course we have our action pane here that determines the classification skill that we’re pointing to. And then here I actually have extraction for two different documents. If you remember, in my sample I actually have one called ACME and one called Custom Kitchens of Bayshore. So that’s what you see here: the extraction skills that we’ve set up for each of those given direct deposit forms. And then of course, we will export that to the FTP folder.

But the critical steps are right here in the middle: assembling the document. Now that we have it assembled, we wanna classify, of course, and then extract that information from the given document type. So let’s go ahead and run a sample. What I’m gonna do is bring up our FTP client, navigate to our input folder, and drag that sample that you saw into the input folder. Now, what the software’s gonna do, of course, is monitor the input folder here, and when it comes to its next polling cycle, it will grab that document.

All right, you can see the software has now pulled that document. It is now an actual transaction. If we wanted to, behind the scenes, we would see that transaction in our skills monitor. But what we’re gonna do next is just monitor our output. And what we would expect in this output is for the software to find each of those individual files. And we’re gonna have it export a PDF and a JSON of the extraction that we have for each of those given document types.
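Downstream code often needs to walk that output folder and pair each split-out PDF with its JSON of extracted fields. Here is a small self-contained Python sketch of that idea; the file naming and the JSON layout are assumptions, since Vantage’s actual export format depends on how the export activity is configured.

```python
import json
import tempfile
from pathlib import Path

def load_split_results(output_dir):
    """Pair each exported PDF with its corresponding JSON and collect
    the extracted fields for every split-out document."""
    results = []
    for json_path in sorted(Path(output_dir).glob("*.json")):
        pdf_path = json_path.with_suffix(".pdf")
        with open(json_path, encoding="utf-8") as f:
            fields = json.load(f)
        results.append({
            "pdf": pdf_path.name,
            "has_pdf": pdf_path.exists(),
            "fields": fields,
        })
    return results

# Tiny self-contained demo: fake an output folder with one PDF/JSON pair.
demo = Path(tempfile.mkdtemp())
(demo / "doc_001.pdf").write_bytes(b"%PDF-1.4 demo")
(demo / "doc_001.json").write_text(json.dumps({"EmployeeName": "J. Smith"}))
results = load_split_results(demo)
```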

All right, so here is our transaction. If I go into our transaction, you’ll see I have the PDF of each of those and a corresponding JSON. So the software did me two favors: it split up the document, and then based on the document type, it extracted the given fields for us here. So this is a perfect example of how we utilize the assemble methodology on a process skill. I hope you enjoyed this video.



ABBYY Vantage Video – Connector for Salesforce®

Watch our video to observe how ABBYY Vantage and Salesforce® can be utilized together to perform document classification and data extraction.

Hello. Today I’d like to show you our Salesforce® Connector for ABBYY Vantage. The concept of this connector is that we can have documents that flow in through Salesforce in a typical case management or customer portal arrangement. The document will come in, potentially on a case. There will be some automatic triggering of a flow to ABBYY Vantage, so that as that document comes in, we can classify it, in other words, determine the document type. We can then extract information from that given document type.

In today’s case, we have a Salesforce case here in front of us, and I’m gonna be uploading a document that is a direct deposit form. This is just any sort of incoming form that we’re gonna post on the case. You can see there’s information here that we wanna extract potentially and use as detailed information metadata on that case.

So what you’re gonna see is we will trigger a flow from Salesforce. That document will go into Vantage. We will classify that document, in other words, determine that it’s a direct deposit form. We will extract information from that form, and then we’ll call Salesforce and provide that information back.

So as you see here, what we’ll do is just go ahead and upload a document. Now that the document is uploaded, this is where we could potentially have some automation. For today’s demo, I just have a little button I’m gonna push to trigger the automation, but this of course can be automated. So we’re gonna send that document over to Vantage. Now that the document’s over in Vantage and going through that Vantage workflow that I just described, when we refresh this screen, this description will have JSON of the details that Vantage has pulled off of that given document.

There is our description, which is really just detailed JSON of the information that we’ve extracted. Now, of course, in a truly automated fashion, we would probably take that JSON and parse it into specific fields on the Salesforce case. But the concept here is that we have Salesforce talking to a best-in-class OCR technology within ABBYY Vantage, which then provides that full loop back to Salesforce, where additional automation can take place. So in a typical case management solution, we’d have documents flow in, we’d pass them to the OCR technology, and then the OCR technology, in this case ABBYY Vantage, would post that information back to the case, where we can then triage and automatically inform the case with details that we found on that given document.
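The “parse that JSON into specific fields” step mentioned above can be sketched in a few lines. Everything here is hypothetical: the payload shape depends on the Vantage skill, and the `__c` Salesforce custom-field API names are made up for illustration; in practice the resulting update would be applied through a flow, Apex, or the Salesforce REST API.

```python
import json

# Hypothetical payload as it might appear in the case description;
# the real JSON layout depends on how the Vantage skill is configured.
description_json = json.dumps({
    "documentType": "Direct Deposit Form",
    "fields": {"EmployeeName": "J. Smith", "RoutingNumber": "021000021"},
})

# Extracted-field name -> hypothetical Salesforce custom field API name.
FIELD_MAP = {
    "EmployeeName": "Employee_Name__c",
    "RoutingNumber": "Routing_Number__c",
}

def case_update_from_description(raw):
    """Turn the description JSON into a field update for the case."""
    data = json.loads(raw)
    update = {sf_name: data["fields"][vantage_name]
              for vantage_name, sf_name in FIELD_MAP.items()
              if vantage_name in data["fields"]}
    update["Document_Type__c"] = data.get("documentType")
    return update

update = case_update_from_description(description_json)
```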

“Salesforce” is a trademark of salesforce.com, inc., and is used here with permission. The “Salesforce Corporate Logo” is a copyright of salesforce.com inc. ©2022 salesforce.com, inc. Additional Salesforce elements and icons are displayed. All rights reserved.



ABBYY FlexiCapture 12 – Check Recognition Video

Watch our video to learn how to process both handwritten and printed checks in ABBYY FlexiCapture 12.

Hello. Today I’d like to give you an overview of how we can process checks within the ABBYY Technology Suite. What I have in front of us is just a series of checks: a mixture of printed-text checks, like those from an organization, a company, or a bank, along with handwritten checks. So we can obviously process both.

Let’s start at the beginning. This is a basic check. You can see here, it’s text. This is fairly easy from an OCR perspective. The text is fairly clean, and we’re gonna be able to find the text and the fields on this check very, very confidently. So you can see here we’re grabbing everything from the payee to what we refer to as the CAR and LAR, or the Courtesy Amount and the Legal Amount, the date of the check, the check number, and then of course the MICR line down here at the bottom, which gives us the account number, the routing number, and even the check number within it. We delimit those details cleanly so that we have the ability to parse those individual values out as needed here. So text checks are fairly basic from an OCR extraction perspective.
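Parsing those delimited MICR details can be sketched as below. One big assumption: OCR engines render the MICR special symbols differently, so this sketch pretends the transit symbol comes through as `:` and the on-us symbol as `;`; adjust the patterns to whatever your engine actually emits. The ABA routing checksum shown is the standard 3-7-1 weighting.

```python
import re

def parse_micr(micr):
    """Split a MICR line into routing, account, and check number.
    Assumes the transit symbol is emitted as ':' and the on-us symbol
    as ';' -- OCR engines differ in how they render these special
    characters, so adjust the patterns to your engine's output."""
    transit = re.search(r":(\d{9}):", micr)        # routing between transit symbols
    onus = re.search(r";(\d+);\s*(\d+)", micr)     # account, then check number
    return {
        "routing": transit.group(1) if transit else None,
        "account": onus.group(1) if onus else None,
        "check": onus.group(2) if onus else None,
    }

def is_valid_routing(routing):
    """ABA routing-number checksum: weights 3, 7, 1 over nine digits."""
    weights = (3, 7, 1, 3, 7, 1, 3, 7, 1)
    return sum(w * int(d) for w, d in zip(weights, routing)) % 10 == 0

parsed = parse_micr(":021000021: ;456789012; 1042")
```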

Then we get into some of the harder options, things like handwritten checks, where we typically don’t control the handwriting that a person puts on a check. So you can see here, we actually have very advanced handwriting recognition options within the ABBYY FlexiCapture Suite. We can capture, in this case, all of that information that we were capturing on the text check, but with very high confidence. In fact, you can see this one here is 100% confident, which is a very good grade from an OCR perspective.

Once again, you can see we’ve extracted the fields. We’ve actually extracted things like the legal amount recognition line. Very, very high quality here. So this is a good example.

When we do have a downgrade in reading a check, you’ll see here on this given example there are some downgrades in the MICR down at the bottom. And really the only reason is because this is a marked-up check that we found online for testing, so somebody’s annotated that check for us. But obviously without that, you can see this is actually a perfect reading from an extraction perspective.

I’ll show you one more, and then we’ll briefly talk about how we can validate the information on the check. So here’s our last one. Anytime there’s a downgrade of confidence, the software will highlight those characters specifically in red for us, so that we have the ability to bring that to somebody’s attention and get it manually fixed through a human review step within the technology. So at any point we can correct this and make sure this gets processed accordingly.

Lastly, there are times we wanna validate information on a check, such as: Is the date present? Are the amounts proper? Do they match? Is the check number within a certain min and max? And here we have a situation where there is no date on this check. Within the ABBYY FlexiCapture software, you have the ability to set business rules and control the logic on which fields are required, along with those other business rules. And anytime we find those missing, or the validation has failed an audit, we have the ability to route and track that as an error, so that we can make sure it gets corrected before we pass the data, and in this case even potentially a copy of the check, to the downstream step in the process.
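The business rules described above can be sketched as a small validation pass. This is only an illustration of the logic, not FlexiCapture’s rule engine; the field names and date format are assumptions, and in FlexiCapture itself these rules would be configured in the document definition.

```python
from datetime import datetime

def validate_check(fields, min_check_no=1, max_check_no=999999):
    """Apply business rules like those described in the video.
    The field names here are assumptions, not FlexiCapture's schema."""
    errors = []
    date = fields.get("date")
    if not date:
        errors.append("missing date")
    else:
        try:
            datetime.strptime(date, "%m/%d/%Y")  # assumed US date format
        except ValueError:
            errors.append("unreadable date")
    car = fields.get("courtesy_amount")   # CAR: numeric amount box
    lar = fields.get("legal_amount")      # LAR: written amount line
    if car is not None and lar is not None and abs(car - lar) > 0.005:
        errors.append("courtesy and legal amounts do not match")
    check_no = fields.get("check_number")
    if check_no is not None and not (min_check_no <= check_no <= max_check_no):
        errors.append("check number out of range")
    return errors

# The failing case from the video: everything reads fine, but no date.
errors = validate_check({
    "courtesy_amount": 125.00,
    "legal_amount": 125.00,
    "check_number": 1042,
})
```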

So check processing is a really good, challenging way to use OCR technology, not just from a printed-text perspective but from a handwriting recognition perspective, and a very good use case for the ABBYY Technology Suite. Hope you enjoyed the video.



ABBYY Vantage Video – Segmentation Activity

Watch our video to observe how to segment content on unstructured documents in the ABBYY Vantage Advanced Designer by creating a Segmentation Activity.

Hello. Today I’d like to talk to you about segmentation within the ABBYY Vantage product. Segmentation gives us the ability to limit where we find content, typically in an unstructured document scenario. So things like contracts, leases, letters, just documents that have a very limited amount of structure to them, if any.

So what we’ve done in a previous video is we talked about named entity recognition, the ability to extract information that typically follows things like names and addresses, currencies, dates, times, et cetera. And we’re gonna kind of build on that topic here, where we can limit where we find that information based on the segments that we find in the document.

So what I’ve done is I’ve created a skill and uploaded some documents here. And now what I’m going to do is create some fields. When I create a field, I’m gonna call this one the “Notice of Hearing Segment”. What I’m gonna do then is teach the software where to locate the “Notice of Hearing” information on the samples that we’re using. Now, typically we would do this over a decent-size sample set. I’m only using three samples, which typically would not be recommended. You’d probably want something to the effect of 20 or more documents to be able to train on the textual piece of what the software’s looking to find here.

So what I’m gonna do is we’re gonna create a field that we call “Notice of Hearing Segment”. And then now that we know where we’re gonna locate the Notice of Hearing, then we’ll actually say, we wanna find the date for the hearing. And maybe just for fun, we’ll wanna find also the address for the hearing.

All right. So now that we have this, one of the first things I’m going to do is go to our “Activities”. Now I have a previous activity for our named entity recognition; let’s go ahead and delete that for now. We’ll just start over with our segmentation, so we’re gonna add a “Segmentation Activity”. When I do that, I wanna teach the software that when I output this, I will have a segment here. So I’m just going to map the “Notice of Hearing Segment”, and then I’m gonna click this activity editor. The software’s gonna ask me which samples I want to use; I’m just gonna go ahead and select all of them. And then I’m going to teach the software about the “Notice of Hearing Segment”. That’s all I’m gonna do in this case. So I’m going to zoom out so we see it, and I’m just gonna say, Hey, this is where we can typically find the Notice of Hearing. I’m just gonna train the sample set where to find the Notice of Hearing details. It’s a pretty simple click and lasso here.

And so now that I’ve taught the software about the segment, the clause or the specific spots in these documents where to find the Notice of Hearing, I’m gonna go ahead and train the activity. This is an important step. We wanna train the software where to locate the “Notice of Hearing Segment” in this case.

Okay. Our segmentation training has completed, and we will now go back into our skill. So now that we know the segment, there are a couple of additional things we may want to find within that segment. One of them may be the hearing date or the hearing address. So what I’m gonna go ahead and do is add our “Named Entity Recognition” step here. And the source in this case will be the segment. So instead of using the text from the whole document, we’re going to use just the segment that we locate, called “Notice of Hearing”. When I do that, I’m going to map the hearing date and hearing address. I can actually do that by just creating this here. And we’re gonna say, go look for the date and go look for the address located in the “Notice of Hearing Segment”.

One thing I will tell you about here is that we have the ability to accept multiples. So if we find multiple dates or multiple addresses, we may want to locate those. In today’s demo, I’m not going to do that, because it’s not relevant to this specific use case where we’re looking for one hearing date and one hearing address, but in cases where we have multiples, this is a good spot to enable that as well. We’re gonna go ahead and hit save. So what I’ve done is I’ve trained the software where to find the Notice of Hearing information in this segment, and now I’m gonna tell it to go find me specific dates and addresses within that segment. And now, just for fun, let’s go ahead and run a test on this training.
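Conceptually, “source = segment” means NER hits are restricted to the located segment rather than the whole document text. A minimal Python sketch of that restriction, using entity character offsets, is below; the offsets and the entity dictionaries are illustrative and are not Vantage’s internal data model.

```python
def entities_in_segment(entities, seg_start, seg_end):
    """Keep only entities whose character span lies inside the segment.
    A conceptual sketch of 'source = segment': run NER over the text,
    then restrict hits to the located segment's span."""
    return [e for e in entities
            if seg_start <= e["start"] and e["end"] <= seg_end]

# Illustrative entity hits with character offsets into the document text.
entities = [
    {"type": "Date", "text": "January 3", "start": 20, "end": 29},  # outside
    {"type": "Date", "text": "20th day of November", "start": 310, "end": 330},
    {"type": "Address", "text": "100 Main St, Springfield", "start": 350, "end": 374},
]
hits = entities_in_segment(entities, 300, 400)  # the "Notice of Hearing" span
```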

Cool. So now that we have this training complete, there’s a cool portion of the software that I want to show you. You’ll recall that we trained the software on this sample set where to locate the segment. You can see that here, highlighted in that very light green. But the cool part is that the software was able to find the hearing date and the hearing address just by us literally asking it to say, Hey, only look in this segment, and now that you know the segment, go find me a date and an address related to it. Here you can see that as well. So the software found the segment. It’s looking for the hearing date, which in this case is the 20th day of November. And we obviously have a hearing address as well. And on our last one here, we’re likewise looking for the date located in this section, along with the address.

So literally within just a few clicks, by teaching the software where to locate the segment and then being able to find some named entities, I now have a very powerful and accurate model for extracting these named entities on these documents. At this step, what I could do is simply publish our skill and start processing documents in real life against this new skill.



ABBYY Vantage Video – Named Entity Recognition (NER) Activity

Watch our video to learn how to create your first Named Entity Recognition Activity in ABBYY Vantage.

Hello. Today I wanna set up our first Named Entity Recognition Skill. A named entity recognition skill gives us the ability to extract things like names, addresses, dollar amounts, durations, parties, and locations in a document that is typically unstructured.

So let’s go ahead and do our first one. What we’re gonna do is create a skill. We are in the Advanced Designer for ABBYY Vantage, by the way, so just a note of that. We’re gonna create our first document skill.

Alright, now that we are within our first NER skill, we’re gonna go ahead and upload a sample of documents. This is what we sometimes refer to as our sample set: the set that we will test against here. So I’m just gonna upload a few arraignment hearing documents, documents that are very unstructured. They come from courts throughout the world, and we’re looking for some named entities on those documents.

Alright, now that we have our sample set uploaded, what we’re gonna do is go ahead and outline a couple of fields. Now, named entities do come in many varieties, but for today, let’s go ahead and just extract the names that we find on a document, as well as some addresses that we find on a document.

So I’ll go ahead and add two different fields. For each of these fields, we wanna make sure we hit the gear, go into the “Advanced” tab, and allow multiple items, because in a given unstructured document it would be common for us to find multiple names and, in this case, even multiple addresses. So we’re just gonna go ahead and allow multiple items here.

The next thing I will do is go to the activities. When I’m in the activities flow, I’m gonna go ahead and modify this and add a Named Entity Recognition Activity. When I do that, there are a couple of important things that we must provide. The first is the source: where are we providing the text of the document in which we’re going to look for named entities? In today’s demo, this is the whole document text. In practice, it would be very common for us to use things like segmentation, and you’ll see that option here, to help narrow down where we’re looking for a list of these entities.

But in today’s situation, we’re going to go ahead and select that we have two different outputs: names and addresses. We want to click this create mapping button. In this case, we’re going to find the people entities and put those in the names field that we created, and we’re gonna find the address entities and put them in the addresses field that we created. If we manage our fields here, we just wanna make sure that we have that repeatable option enabled on those given fields. So I’m gonna go ahead and hit save.
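The mapping step above amounts to routing each entity type into a repeatable (list-valued) field. Here’s a tiny Python sketch of that idea; the `ner_results` shape and the entity labels are invented for illustration, since Vantage’s actual result schema is not shown in the video.

```python
from collections import defaultdict

# Hypothetical NER hits; the real result schema comes from Vantage.
ner_results = [
    {"entity": "Person", "text": "Jane Q. Public"},
    {"entity": "Address", "text": "100 Main St, Springfield"},
    {"entity": "Person", "text": "John Doe"},
]

# Entity type -> repeatable field, mirroring the create-mapping step.
MAPPING = {"Person": "Names", "Address": "Addresses"}

def map_entities(results):
    """Group entity hits into repeatable (list-valued) fields."""
    fields = defaultdict(list)
    for hit in results:
        field = MAPPING.get(hit["entity"])
        if field:
            fields[field].append(hit["text"])
    return dict(fields)

fields = map_entities(ner_results)
```

Because each field holds a list, a document with three names simply yields a three-element `Names` list, which is exactly why the repeatable option matters.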

Now that we’ve done the named entity mapping, the magic of ABBYY Vantage takes place. And so if we run our test skill, we will go look at the results and see what the software extracted for both the names and the addresses on this document.

Alright. So now that we have some results completed here, let’s go ahead and take a peek at them. Now, here’s the really cool part about the solution. There are obviously reasons why we would want to narrow down a list of names and a list of addresses, but out of the box, on a completely unstructured document that could be one page, 100 pages, or a thousand pages, the software can locate the names and addresses, and obviously other named entity items that we tell it to, on the set of unstructured documents. So on this document here, you can see the software has located these specific names, and a couple of addresses as well. If we look at our next one, you’ll see this document has one formal name of a person and one address listed. And if we look at our last sample, you’ll see we have a couple of different names located on the document and a couple of different addresses as well.

Now, in this demo we’re not gonna go a step further, because we’re just focusing on the types of entities. But like I mentioned, sometimes it’s common to narrow those down to a specific spot of a given document. So in this case, we may wanna locate the address that we’re supposed to report to for this hearing. And maybe we would want to segment, so that we know we’re only looking for those addresses in the notice-of-hearing location of the document. Now, obviously that location will differ based on the entity that’s providing the document, but we want to teach the software, through a sample set, how to recognize that location. So that’s where we would go back to our activity, modify it, and teach the software about segmentation, which is frankly just a few clicks on some samples and letting the software train itself. But we’ll focus on that in another video.

But for today’s case, I wanted to show you how simple it was to add the entities here. I will highlight that there are other entity types that we didn’t focus on, such as dates, durations, money, et cetera, but this is how simple it is: you add the fields and you map them. And then the extraction is really where the platform and its intelligence take place.

From here we would simply publish this skill, and just like any other skill in our Vantage tool set, we now have the ability to extract named entities on these unstructured documents.




ABBYY Vantage Video – ID Reading Skill

Discover how to set up your first identity document skill in ABBYY Vantage.

Hello. Today we’re gonna set up our very first identity document skill together. The easiest way to do this is to go to the ABBYY Marketplace and search for identity documents. If you search for what you see here on the screen, that will bring you to this skill.

When you get to this skill, hit the “Try Asset” button and click “Accept Asset Terms”. When you do that, the software will download a zip file that has the skill contents. Now, to the eye that zip file doesn’t mean much, but what you have there is the ability to import that zip file into the Vantage technology. So I’m gonna do that here. And when we upload it, we will now have a skill called “Identity Documents” in our skills list. At this point, if I click on this, the software will tell me, Hey, we can’t change the pre-trained out-of-the-box skill, but what you can do is duplicate it and edit it. So that’s what I’m going to do. And when I duplicate and edit, you’ll see here behind the scenes the software is actually creating a copy of that skill for me. At this point, I’m gonna go ahead and rename it, and then publish the skill. Now I have a skill that is ready for my consumption.

Now the coolest way to try out this skill is to go to my documents tab. When I go to my documents tab, I’m gonna use a mobile upload feature to show you how you can easily take a picture of a document and get that uploaded into ABBYY Vantage.

All right, so now what I’m gonna do is click the “Mobile Upload” button. When I do that, I will have a QR code. That QR code can be scanned by a mobile device, such as an iPhone or Android device. So what I’m gonna do is actually mirror my screen, so you can see this on my phone.

Here’s my phone, and I’m actually gonna take a picture of it and just hit that “Scan Documents” button. When I hit the “Scan Document[s]” button, you’ll see this little cool app pops up that says, Hey, would you like to open our scan documents feature? When I click “Open”, the software will take me to a window where I can then take a beautiful picture of the ID that I want to scan. The software will auto capture that picture and I can click “Upload”. When I click “Upload” the software will transfer that to ABBYY Vantage. And now if we go behind the scenes back into Vantage again, and we refresh our documents list, you will see that I now have a document up there. So that’s how simple it is just to get even a test document into the solution. Now we can even use that for a production process.

But if I just click this “Select Skill” and search for our ID document skill, the software is going to classify and extract the information off of that document. And if I click “Review” here, you will see the contents that we’ve extracted from this given document. You can see the name, the address, the type of driver’s license this is, and other critical information about the actual document. If I took a picture of the back of the document, I would also have some barcode or MRZ data available that we can perform a comparison against. So this is really how simple it is to get a document into the solution. You can see I already have, within minutes, a document that is classified as a driver’s license. I know the state, I know the person, and I can continue using that document and its extracted data in our downstream process. I hope you enjoyed this video. Thank you so much.
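One common building block when comparing against MRZ data from the back of an identity document is validating the MRZ check digits, which follow the ICAO Doc 9303 scheme. The sketch below implements just that check-digit calculation; it is a generic illustration, not how Vantage itself performs the comparison.

```python
def mrz_check_digit(data):
    """ICAO Doc 9303 check digit: weights 7, 3, 1 repeating over a
    field where digits keep their value, A-Z map to 10-35, '<' is 0."""
    def value(ch):
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch) - ord("A") + 10
    weights = (7, 3, 1)
    return sum(value(c) * weights[i % 3] for i, c in enumerate(data)) % 10

# Fields from the well-known ICAO specimen passport MRZ:
doc_number_digit = mrz_check_digit("L898902C3")  # document number field
birth_date_digit = mrz_check_digit("740812")     # date of birth field
```

Verifying these digits against the printed front-side fields is one cheap way to catch OCR misreads before the data flows downstream.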



ABBYY Vantage Video – Connector for Citrix ShareFile®

Watch our video to discover how you can use our ABBYY Vantage Connector for Citrix ShareFile® to extract data from a document and then store it in Citrix ShareFile.

Hello. Today I’d like to share with you our ABBYY Vantage Connector for Citrix ShareFile®. The goal of today’s demo is to walk a couple of documents through the process skill that you see on the screen. A document will come in through a shared folder. The software will extract the critical information from the document and then store that document in Citrix ShareFile. The goal is to have that as a resting point for the document, a historical reference, as we typically call it, so that we can access that document forever, potentially, depending on the retention policies of our organization.

So this is the basic flow that you’re gonna see. Once I refresh our [Citrix] ShareFile window here, we will then see the documents that we populate from our hot folders.

So let’s go ahead and drag some documents into the system. We have a few direct deposit samples going into our shared folder. ABBYY Vantage will then pick these documents up, and they will go through the process that you just saw outlined on the screen.

Okay. ABBYY Vantage has now picked up those documents. They are now processing through this flow that you see on the screen. When we refresh our [Citrix] ShareFile screen, we will now see those three samples that we uploaded stored here in Citrix ShareFile for historical reference of those documents.

All right, now that we’ve refreshed our Citrix ShareFile screen, you’ll see here that I have those samples stored in Citrix ShareFile. I can click those PDFs, access those documents, and keep them for historical reference and use within our organization and our [Citrix] ShareFile system.

So this is a really great example of using ABBYY Vantage to recognize these documents, extract the information, and then store it in a repository, so we have future access to those documents. Hope you enjoyed this video.


“Citrix ShareFile®” and other marks appearing herein are trademarks of Citrix Systems, Inc., and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries.

The statements made and opinions expressed herein belong exclusively to USER FRIENDLY CONSULTING, INC. and are not shared by or represent the viewpoint of Citrix Systems, Inc. (“Citrix”). This presentation does not constitute an endorsement of any product, service, or point of view. Citrix makes no representations, warranties, or assurances of any kind, express or implied, as to the completeness, accuracy, reliability, suitability, availability, or currency of the content contained in this presentation or any material related to this presentation. In no event shall Citrix, its agents, officers, employees, licensees, or affiliates be liable for any damages whatsoever (including, without limitation, damages for loss of profits, business information, loss of information) arising out of the information or statements contained in the presentation. Any reliance you place on such content is strictly at your own risk.

Related Content:

Screen capture for Vantage connector to Google Sheets

ABBYY Vantage Video – Connector for Google Sheets™

Watch our video to learn how to extract and send data to Google Sheets™ with our ABBYY Vantage – Connector for Google Sheets™.

Hello. Today I’d like to give you a short preview of our ABBYY Vantage – Connector for Google Sheets™. The goal is to classify and extract critical metadata from documents and pass that information into a Google Sheets document, where we can either perform some downstream automation or keep that information in an easily referenceable historical location.

So a typical workflow would look something as basic as this: we bring a document in; today we’ll use a shared folder that we’ll place documents in. The software will then classify and extract from the document. We can decide if we need a human in the loop; this is an optional stage. Then we will pass that information to Google Sheets.

Now, today I’m using Explanation of Benefits documents. These are pretty common documents. There’s some header information that typically describes the patient and provider, and then, in this case, repeating line items that describe the different procedures that took place. So this is just an example document that we’re using for today’s demo.

What I’m going to do is pass this information in through a shared folder. I actually have some samples here that I’ll pass along. The software will eventually pick these up and start processing them. When the software’s done, we will see our Google Sheets document populated with this information.

All right, ABBYY Vantage has now picked up those documents and they are working their way through this workflow. The software will extract the data and determine whether human review is required; in this case it will not be, and we will pass the documents directly to Google Sheets for today’s demo. And as you can see, once I refresh the sheet, the software is now populating the fields that we’re expecting to see here.

This is a very, very simple use case, and frankly a very good one for ABBYY Vantage. It gives us the ability to pass that information quickly into an easily consumable format, such as Google Sheets, where we can then perform analysis and do very quick onboarding of these documents as well. So I hope you enjoyed this video; please let us know if you have any questions.
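The "pass that information to Google Sheets" step above can be sketched against the Google Sheets API v4 `spreadsheets.values.append` method. This is a minimal sketch, not the connector’s actual implementation: the spreadsheet ID, sheet name, and field names are hypothetical placeholders, and `service` is assumed to be an authorized `googleapiclient` Sheets service object.

```python
def fields_to_row(fields: dict, columns: list[str]) -> list[str]:
    """Flatten extracted document fields into one spreadsheet row,
    keeping a stable column order and blank-filling missing fields."""
    return [str(fields.get(col, "")) for col in columns]

def append_row(service, spreadsheet_id: str, row: list[str]) -> None:
    """Append one row after the last populated row of the sheet.

    `service` is an authorized Google Sheets API client; the range only
    anchors the table to search, Sheets finds the next empty row itself.
    """
    service.spreadsheets().values().append(
        spreadsheetId=spreadsheet_id,
        range="Sheet1!A1",
        valueInputOption="RAW",
        body={"values": [row]},
    ).execute()
```

Keeping the field-to-row mapping in its own pure function makes the column order explicit and easy to test separately from the API call.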

Music- “‘Engineered to Perfection’ performed by Peter Nickalls, used under license from Shutterstock”.

“Google Sheets” is a trademark of Google LLC. Use of this trademark is subject to Google Permissions.

ABBYY Vantage Video – Connector for Box®

Watch our video to discover how to use our ABBYY Vantage – Connector for Box® to extract and store data in Box®.

Hello. Today I’d like to share with you our Connector for Box®, sometimes referred to as Box.com. The goal of this connector is to take a document, and perhaps its extracted metadata, and pass that over to Box, giving us a versatile means of storing the document for future purposes.

So in front of us we have a very simple workflow. You can see I have an input stream, which today is gonna be an FTP file share. We’re just gonna drop a document into that file share, the software will extract the metadata from it, and then we will post that document to Box, where it will live for historical purposes.

So it’s a pretty simple flow. You can see here is my Box.com site. I have no documents here in my UFC testing folder. So let’s go ahead and kick off this process. All I’m gonna do is simply move a document into our shared folder. Eventually ABBYY Vantage will pick this up and start processing it through that workflow that you just saw.

All right. That file is now picked up. It is processing through Vantage. And when Vantage is complete here, we will refresh our Box account and we will see that file added to our UFC testing folder.

Okay, so there, you can see it. We now have our invoice. In this case, this was an invoice document. We now have that invoice here in our Box account so that we can store it and review it and open it for historical purposes. We will always have access to it here within our Box account.

So that’s all I wanted to show you today: this connector. It gives us a very simple way to keep our files, makes it very simple to process them and extract the critical data, and makes sure that we have a historical reference of each document. Hope you enjoyed this video.
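The "post that document to Box" step can be sketched against Box’s file upload endpoint (`POST https://upload.box.com/api/2.0/files/content`), which takes a multipart request with a JSON `attributes` part and the file bytes. A minimal sketch, not the connector itself: the folder ID and access token are hypothetical, and `session` is assumed to be a `requests.Session`.

```python
import json

def upload_attributes(file_name: str, folder_id: str) -> str:
    """Build the JSON 'attributes' part Box expects alongside the file bytes:
    the new file's name and the ID of the parent folder it should land in."""
    return json.dumps({"name": file_name, "parent": {"id": folder_id}})

def upload_to_box(session, access_token: str, file_name: str, data: bytes, folder_id: str):
    """Upload one document to a Box folder via the files/content endpoint."""
    return session.post(
        "https://upload.box.com/api/2.0/files/content",
        headers={"Authorization": f"Bearer {access_token}"},
        files={
            # Box requires the metadata part before the file part.
            "attributes": (None, upload_attributes(file_name, folder_id)),
            "file": (file_name, data),
        },
    )
```

Folder ID `"0"` is Box’s root folder; a real connector would target a dedicated folder such as the UFC testing folder shown in the demo.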

Music- “‘Engineered to Perfection’ performed by Peter Nickalls, used under license from Shutterstock”.

“Box” is a registered trademark of Box, Inc. and/or its affiliates.

Related Content: