Hello. Today I’d like to give you an overview of how we can process checks within the ABBYY Technology Suite. What I have in front of us is a series of checks: a mixture of machine-printed checks, say from an organization, a company, or a bank, along with handwritten checks. So we can obviously process both.
Let’s start at the beginning. This is a basic check. You can see here, it’s machine-printed text. This is fairly easy from an OCR perspective. The text is fairly clean, and we’re going to be able to find the text and the fields on this check very confidently. So you can see here we’re grabbing everything from the payee to what we refer to as the CAR and LAR, or the Courtesy Amount and Legal Amount Recognition fields, the date of the check, the check number, and then of course the MICR line down here at the bottom, which gives us the account number, the routing number, and even the check number within it. We delimit those details cleanly so that we have the ability to parse those individual values out as needed. So text checks are fairly basic from an OCR extraction perspective.
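Conceptually, once the MICR line has been delimited, parsing out the routing, account, and check numbers is a small exercise. Here is a hedged sketch in Python; it assumes the E-13B transit and on-us symbols have been transliterated to the characters `T` and `U`, which is an assumption made for this illustration, not ABBYY's actual output format.

```python
import re

# Simplified, illustrative MICR parser. Real MICR lines use E-13B symbols
# (transit, on-us); here we assume they arrive transliterated as 'T' and 'U'.
MICR_PATTERN = re.compile(
    r"T(?P<routing>\d{9})T\s*"   # routing number between transit symbols
    r"(?P<account>\d+)U\s*"      # account number ending with an on-us symbol
    r"(?P<check>\d+)"            # check number
)

def parse_micr(line: str) -> dict:
    """Split a transliterated MICR line into routing, account, check number."""
    m = MICR_PATTERN.match(line.strip())
    if not m:
        raise ValueError(f"unrecognized MICR line: {line!r}")
    return m.groupdict()
```

With that convention, `parse_micr("T021000021T 1234567890U 1001")` yields the three delimited values as a dictionary.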
Then we get into some of the harder options, things like handwritten checks, where we typically don’t control the handwriting that a person fills in on a check. You can see here, we actually have very advanced handwriting recognition options within the ABBYY FlexiCapture Suite. We can capture, in this case, all of the information that we were capturing on the text check, but with very high confidence. In fact, this one here you can see is 100% confident, which is a very good grade from an OCR perspective.
Once again, you can see we’ve extracted the fields. We’ve actually extracted things like the legal amount recognition line. Very, very high quality here. So this is a good example.
When we do have a downgrade in reading a check, you’ll see here on this given example, there’s some downgrades here in the MICR down at the bottom. And really the only reason is because this is a marked up check that we found online for testing and so somebody’s annotated that check for us. But obviously without that, you can see, this is actually a perfect reading from an extraction perspective.
I’ll show you one more, and then we’ll talk lastly and briefly about how we can validate the information on the check. So here’s our last one. Anytime there’s a downgrade of confidence, the software will highlight those specific characters in red for us, so that we have the ability to bring that to somebody’s attention and get it manually fixed through a human review step within the technology. So at any point we can correct this and make sure it gets processed accordingly.
Lastly, there are times we want to validate information on a check, such as: Is the date proper? Are the amounts proper? Do they match? Is the check number within a certain min and max? And here we have a situation where there is no date on this check. Within the ABBYY FlexiCapture software, you have the ability to set business rules and control the logic on which fields are required and other business rules like these. Anytime we find a field missing or a validation has failed in audit, we have the ability to route and track that as an error, so that we can make sure it gets corrected before we pass the data, and in this case even potentially a copy of the check, to the downstream step in the process.
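The kinds of business rules described above can be pictured as simple checks over the extracted fields. This is a minimal sketch; the field names, the date format, and the specific rules are illustrative assumptions, not FlexiCapture's built-in rule set.

```python
from datetime import datetime

# Illustrative business-rule validation for an extracted check.
def validate_check(fields: dict, check_min: int, check_max: int) -> list:
    errors = []
    # Rule 1: a date must be present and well-formed.
    if not fields.get("date"):
        errors.append("date is missing")
    else:
        try:
            datetime.strptime(fields["date"], "%m/%d/%Y")
        except ValueError:
            errors.append("date is not a valid MM/DD/YYYY value")
    # Rule 2: the courtesy (numeric) and legal (written) amounts must agree.
    if fields.get("courtesy_amount") != fields.get("legal_amount"):
        errors.append("courtesy and legal amounts do not match")
    # Rule 3: the check number must fall within an allowed range.
    check_no = fields.get("check_number")
    if check_no is None or not (check_min <= int(check_no) <= check_max):
        errors.append("check number outside allowed range")
    return errors
```

A check with matching amounts and a valid number but no date would come back with a single "date is missing" error, which is exactly the situation that a required-field rule would route to review.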
So check processing is a really good, challenging way to use OCR technology, not just from a printed-text perspective but from a handwriting recognition perspective; a very good use case for the ABBYY Technology Suite. Hope you enjoyed the video.
[Music- “‘Engineered to Perfection’ performed by Peter Nickalls, used under license from Shutterstock”.]
Learn about how you can use the Reporting Service to gather important information and statistics about documents going through the ABBYY FlexiCapture workflow.
Hello. Today I’d like to share with you our FlexiCapture Reporting Service. This Reporting Service is really a background service that captures information and statistics about documents that are going through a workflow of ABBYY FlexiCapture. Now after installing the service, one of the important things to do is to visit the Event logging mode page in the Administration and Monitoring Console. Within this section, you’ll see that we have a URL. We need to provide the URL specifically to the Reporting Service, and then FlexiCapture will post and make calls to this Reporting Service URL to capture the data.
There’s really not much to see from a FlexiCapture perspective, except the type of data that we extract. You’ll notice that when we install the FlexiCapture Reporting Service, we establish a database. That database is where FlexiCapture will warehouse the information that we want to collect. We can get items down to a document level, so we can see all sorts of things from the document itself, page counts and those sorts of things, to field-level specific information, or even page-level information. So the data can get quite granular here. We also, as you can see, capture information about user interaction, suspicious fields, and when fields and pages are changed. So there’s a lot of data that FlexiCapture is capturing in this Reporting Service.
Now, one of the cool ways to start seeing and glancing at this data is that we have sample reports. If you don’t have access to these sample reports, please reach out to us and we’ll provide them to you. But to give you an idea of what these reports look like, I’ll share them with you quickly. Now that we can look at that database, we have information about Page Classification. We have information about Document Type Classification and statistics around that. We can obviously get Field Level Information: what fields did we extract? What were the confidence levels? Those sorts of things. We can even get down to the nitty-gritty of the types of information that we’ve extracted as well. And then of course we have what we call Verification Quality. So why did these documents need verification? Who reacted to them? What were the errors? What were the values that we extracted? What business rules were violated? And how did the software assemble the documents? So there’s so much information that this Reporting Service can capture for us; it’s just a matter of understanding what we have access to. This has been a unique addition to FlexiCapture because it really does give us visibility into the data.
Now, one of the more important things I will call out is that as documents start processing through the Reporting Service, if you want field-level specific information, you will need to go into your project and open your document definition. When you open your document definition, for the fields over on the right side, you will need to tell the software whether or not you’re going to track them as an Indexed Field. Indexed Fields are the values that will be saved to the database; otherwise, the field-specific data will not be extracted. Using this property enables us to get that specific information about the field.
So I’ll show you an example. If I query our document field table here, you’ll see, because I have Last Name on my document definition as an Indexed Field, the software will now capture the value. Otherwise it will not capture the value for us. It will note that we found the field, but it will not provide the value unless it’s an Indexed Field. So that’s just one important thing to remember about the Reporting Service. Otherwise, you have access to a ton of data that gives you the ability to look at your workflow and figure out ways to start making good business decisions, such as how to increase straight-through processing, and also looking at the number of staff you have involved in a process.
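To picture the Indexed Field behavior, here is a toy query against an in-memory SQLite database. The table and column names below are assumptions made for this illustration; the real Reporting Service schema (and database engine) will differ.

```python
import sqlite3

# Illustrative only: hypothetical names standing in for the Reporting
# Service schema. An indexed field carries its value; a non-indexed field
# is recorded without one.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE DocumentField (
        DocumentId INTEGER, FieldName TEXT, FieldValue TEXT
    )""")
conn.executemany(
    "INSERT INTO DocumentField VALUES (?, ?, ?)",
    [
        (1, "LastName", "Smith"),   # indexed field: value captured
        (1, "FirstName", None),     # non-indexed field: presence only
    ],
)
# Only indexed fields carry a value we can report on.
rows = conn.execute(
    "SELECT FieldName, FieldValue FROM DocumentField "
    "WHERE FieldValue IS NOT NULL"
).fetchall()
```

The query returns only the Last Name row, mirroring what you see when a field is not marked as indexed: the field is noted, but its value is absent.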
So very important information here that’s provided to you from the FlexiCapture Reporting Services. Not much to see on the FlexiCapture side, but really the item that I want to share with you today is all about backend data. And the fact that we have access to all of this data that gives us the ability to make intelligent business decisions. Thanks so much for watching this video.
Watch a video to learn how to integrate Automation Anywhere™’s RPA technology with ABBYY’s OCR software, ABBYY FlexiCapture.
Hello. Today I’d like to show you a video of how we integrate Automation Anywhere with ABBYY FlexiCapture. The combination of these two tools gives us a really unique advantage in using a best in class RPA through Automation Anywhere, and a best in class OCR technology through ABBYY. And it’s actually a pretty simple integration, just a few steps to walk through. So what I have in front of us is a bot. It’s just an automation bot. And the purpose of this bot is to call the ABBYY Web Service API. Now the ABBYY Web Service API is very advanced. A lot of cool unique methods in there that we can call and customize our document capture workflow. But you can see here, it’s actually fairly simple, at least in the way that I’m interacting here with this bot.
What we’re going to do is we’re going to open a session. We’re going to open a batch. We’re going to pass it a file. We’re going to close a batch and then we’ll tell the software to process a batch. So it’s actually quite simple. Now before I go into a few of these steps and describe kind of our approach here, let me just run the bot. And the purpose of the bot here is to make sure that we pass a document into this batch. So right now there are zero documents and zero pages. We’re going to pass a document in here and the software will automatically process it. I’ll go back to our bot and I’ll hit run. When I do this, the software will perform those series of API calls. And when it’s done, we will have a batch within FlexiCapture that has pages. I’m going to just go ahead and refresh my screen. You can see now the software is Processing, and now it’s in Verification. Now that I’m in Verification, I have a document that’s been passed in to ABBYY for further OCR processing. So that’s kind of the idea of the bot.
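The five steps the bot performs can be sketched as a small script. This is a minimal illustration only: the endpoint names (`OpenSession`, `AddNewBatch`, and so on) and payload shapes are assumptions loosely modeled on the FlexiCapture Web Services API, so consult the ABBYY Developer's Help for the real routes. The HTTP call is injected as a `post` callable so any client, or an RPA activity, can stand in for it.

```python
import base64

# Sketch of the bot's call sequence: open session, open batch, add a file,
# close the batch, process the batch. Endpoint names are assumptions.
def run_capture_flow(post, base_url, project_id, file_bytes, file_name):
    """post(url, payload_dict) -> response dict; injected so the HTTP
    client can be swapped or stubbed out."""
    session = post(f"{base_url}/OpenSession", {"roleType": 3, "stationType": 1})
    session_id = session["SessionId"]
    batch = post(f"{base_url}/AddNewBatch",
                 {"sessionId": session_id, "projectId": project_id})
    batch_id = batch["BatchId"]
    post(f"{base_url}/AddNewImage", {
        "sessionId": session_id, "batchId": batch_id,
        "file": {"name": file_name,
                 "bytes": base64.b64encode(file_bytes).decode("ascii")},
    })
    post(f"{base_url}/CloseBatch", {"sessionId": session_id, "batchId": batch_id})
    post(f"{base_url}/ProcessBatch", {"sessionId": session_id, "batchId": batch_id})
    return session_id, batch_id
```

Note how the session id returned by the first call threads through every later payload; that is the same dependency the bot manages with its SessionId variable.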
Let’s go into some of the specifics of how we make this happen. First off, architecturally, we use the Web Service API, like I already shared. It’s very powerful and gives us a ton of flexibility in the way that we process documents. But one of the more important aspects of how we interact with the API is that everything uses what we call a “session”. So if I expand our session, you can see here I have a REST Web Service activity where we’re going to post to a Web Service URL variable. It’s pretty simple; I’ll just show you mine. This is on my local system, so you can see the value there, but I’m going to post it. I have a couple of disabled steps here just to see some responses. And then we’re going to go ahead and extract the text. That text will tell us the “SessionId”. That SessionId is fairly critical to downstream processing, so it’s an important value that we want to keep, and we will store it in this SessionId variable. Then as we interact with the API going down, for example, I may want to open a batch. Well, when I open a batch, we have to send the Web Service API, here in step number 10, a series of JSON. In order to do that, you can see what I’ve done is store our JSON in a variable called “Open Batch”. So if I look at our Open Batch, you can see my JSON here.
Now, a couple of interesting parts of this JSON that you’ll see are a “sessionid” and a “batchid”. Once again, these are values that have been provided either by other steps or by other variables, and so we need to populate them in our Automation Anywhere workflow. The way that we’ve done it, at least for this demo today, is we simply use a “String: Replace”. So you can see here, we’re going to take that JSON, look for this kind of placeholder flag of the variable, and populate it with another variable that was populated from another step. That’s how we replace the sessionid and batchid, so that when we have our JSON and we post it, we have the ability to pass in those variables with the proper values.
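The "String: Replace" step amounts to swapping placeholder flags in a JSON template for the values captured earlier. Here is the same idea in Python; the `$Name$` placeholder style is an assumption for this sketch, and whatever flag convention you pick in your own bot works the same way.

```python
import json

# JSON body stored as a template with placeholder flags, mimicking the
# Automation Anywhere "String: Replace" approach from the video.
OPEN_BATCH_TEMPLATE = '{"sessionId": "$SessionId$", "batchId": "$BatchId$"}'

def fill_template(template: str, values: dict) -> str:
    """Replace each $Flag$ placeholder with the corresponding value."""
    body = template
    for flag, value in values.items():
        body = body.replace(f"${flag}$", str(value))
    return body

payload = fill_template(OPEN_BATCH_TEMPLATE,
                        {"SessionId": 17, "BatchId": 204})
parsed = json.loads(payload)   # confirm the result is still valid JSON
```

Round-tripping the filled template through `json.loads` is a cheap sanity check that the replacements didn't break the JSON structure before you post it.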
The only other thing I’ll mention is that every time we make a JSON call, we want a JSON response, and the way that we do that is with a dictionary. You can see here, this is my open batch response. Within that response, we have the ability to store it here at the post. Now, anytime I want to reference that response, say I want to do some debugging, or I want to use a value from it, you can see here that we can populate that dictionary with what we call the “Body” element of the JSON. That gives us the ability to see the JSON in full, from which we can then parse and do all sorts of fun things. So I’m a big fan of using the variables to your own benefit here, using replacements and those sorts of things, so that you can call the Web Service API and make life quite a bit easier on yourself.
The only other thing I’ll mention here, just so you’re aware (there are a couple of different ways you can do it), is that one part of the integration with Automation Anywhere is that you’ve got to pass it a file, and that file needs to be a base-64 string. What we’ve done in our demo today is we have a little DLL that we’ve written. It’s like a three-line DLL, very simple, but the purpose of this DLL is to take an image, specifically a file, and create a base-64 string out of it; then we store that string right here in this variable called “Base64Bytes”. That’s what we will pass to ABBYY to ingest and perform the extraction downstream. So a couple of hints there. There might be other ways you may think of to do that, but this is actually a very simple way that gives us a little bit of control. We wrote this DLL in .NET, but you have other options: Python is available out of the box with Automation Anywhere, and of course you can implement the string conversion there to convert your file over to base 64.
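Since Python is available out of the box with Automation Anywhere, the three-line .NET DLL has an equally short Python equivalent: read the file's bytes and base-64 encode them. The function name here is our own; only the encoding step matters.

```python
import base64
from pathlib import Path

# Python equivalent of the three-line .NET DLL from the video: read a
# file and return the base-64 string that gets passed to FlexiCapture.
def file_to_base64(path: str) -> str:
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")
```

The resulting string is what you would store in the "Base64Bytes" variable before posting it to the Web Service API.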
The only other thing I’ll mention here in this workflow, to wrap it up, is that we process the batch. You can see here we make sure our JSON is well-formed by giving it the information we need, so we have a couple of string replaces. We post it, and then we do have a message box of the body here. What makes this integration with ABBYY FlexiCapture so powerful is that it’s just web service calls. It’s very clean. We can use the software purely in the cloud, in a hybrid model, or purely on premise. So the option here for web service interaction is really clean, and there are a lot of different ways we can store the proper JSON communications and credentials within Automation Anywhere’s package to really make this a seamless interaction.
If you have any questions on this, please reach out. Thank you so much.
“Automation Anywhere” is a trademark/service mark or registered trademark/service mark of Automation Anywhere, Inc. in the United States and other countries.
Watch our video to learn how you can use the UFC FlexiCapture Health Monitor Tool to monitor and review the overall health of your ABBYY FlexiCapture System. Discover how to utilize the “Rules”, “API”, “Performance Counters”, and “Windows Services” functions as well.
Hello. Today I’d like to give you a tour through our ABBYY FlexiCapture Health Monitor Tool. This tool gives us the ability to keep track of our ABBYY FlexiCapture system to ensure that it’s always up and running and always ready to tackle new documents and batches that come in for us to process. It essentially gives us the ability to look at a number of different things that we would consider the health of an ABBYY FlexiCapture system by creating what we call “Rules”. These “Rules” are defined as “API” rules, service rules, “Performance Counters” rules, and “Windows Services” rules. So for example, we have the ability to monitor the ABBYY FlexiCapture API, and we can perform different types of API tests on a periodic basis. This is actually one of the most reliable ways to tell if a system is up and running accurately: simply performing an API call periodically. And that’s what we have configured here.
So we can establish a local connection or a connection to the Application Server, and we can ask the software to perform an API test and monitor this every five minutes. Really, what happens here is a very basic call: we call ABBYY through the Web Service API, and we’re expecting a session and potentially even some projects to be returned in an efficient amount of time. If that fails, then of course we can log it.
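That health check logic reduces to: call the API, and treat the system as healthy only if the call returns quickly with a session and at least one project. A minimal sketch follows; the expected response shape is an assumption, and the API call is injected as a callable so the sketch stays independent of any particular HTTP client.

```python
import time

# Sketch of the periodic API health check described above. `call_api`
# performs the actual Web Service request and returns a parsed response,
# assumed here to look like {"SessionId": ..., "Projects": [...]}.
def api_health_check(call_api, timeout_seconds=5.0):
    start = time.monotonic()
    try:
        result = call_api()
    except Exception:
        return False            # any failure to connect is a rule violation
    elapsed = time.monotonic() - start
    return (elapsed <= timeout_seconds
            and "SessionId" in result
            and len(result.get("Projects", [])) > 0)
```

A scheduler (or the Health Monitor itself) would invoke this every five minutes and log a rule failure whenever it returns false.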
We have other types of rules, including looking at the ABBYY FlexiCapture service on the Processing Server. So we have the ability to see how many cores we have utilized and how many are free. Another cool part about this is that we can look at the Performance Counters. Behind the scenes in ABBYY FlexiCapture there are counters that tell us very critical things about the status of our application. When we installed this Health Monitor on the Processing Server, we gained access to those counters. So we can, at any point, see the number of free cores, how many cores we’re consuming, even how many tasks are pending, and so on. And you can see here, we have quite a list of things we can analyze. The reality is that at any point we may want to set a threshold, and within this tool, we have the ability to add new rules. So we can say, hey, if this counter exceeds a certain threshold, and we’re going to poll that every five minutes, for example. So here, we’re going to say: check this counter every five minutes, and if the value exceeds, falls below, or is equal to a certain threshold, then we will monitor and log that as a rule failure.
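A counter-threshold rule of that kind reduces to one small comparison evaluated on each poll. The condition names below are assumptions chosen to mirror the wording in the tool's UI.

```python
import operator

# Threshold rule evaluation like the Health Monitor's counter rules:
# flag a violation when the polled value exceeds, falls below, or
# equals the configured threshold.
_OPS = {
    "exceeds": operator.gt,
    "falls below": operator.lt,
    "equals": operator.eq,
}

def rule_violated(counter_value: float, condition: str, threshold: float) -> bool:
    return _OPS[condition](counter_value, threshold)
```

Each poll cycle would read the Windows performance counter, call `rule_violated`, and log a rule failure when it returns true.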
And so that’s the idea of this application: we can look at these things that tell us about the health. We can then check the status of our Windows Services to make sure that at the Processing Server and Processing Station level, we are running efficiently. At the end of the day, we have these rules that are checked, and when we want to know the status of these rules, we have logging that happens.
In many environments, we will want to log trace details and then rule violations. So as you can imagine with any application, there’s kind of always ongoing logging that happens. So we can tell the software what to do if we have traces, or if we just literally want to track rule violations. So we can tell the software to log that to a “Text File” or a “Windows Event”, or frankly, “None”. We have different styles of logging to tell us how verbose we want it to be. But we did create a kind of a “Production” default here that only logs typically rule violations to either a “Text File” or “Windows Event Log”, which allows you then to use other reporting and monitoring capabilities to trigger email notifications and things like that. Well, I hope this was a good video. If you have any questions on this, please reach out. Thank you so much.
Watch a video to discover how to create a hot folder in ABBYY FlexiCapture. Some of the key functions displayed in this demo include: creating an adequate UNC import path, creating a share, and using the Administration and Monitoring Console.
Hello. Today I’d like to show you how to create a hot folder within ABBYY FlexiCapture. The first thing I will do is go to the “Project Menu” and go to “Image Import Profiles”. I will hit new and provide the path where I will import the documents from. Now, in the distributed version of the software, it’s very important that this is a share, whether it be a folder that is shared on the Application Server or a network share, but it needs to be a proper UNC path. It’s also very important that this share grants rights to the service account that you’ve installed ABBYY FlexiCapture under. In other words, that service account needs read-write ability to make modifications to that folder, because the software will literally use this folder to import documents and then transfer them accordingly.
So once I provide the share in the text box there, I’ll hit next. We will tell the software how often to read from this directory. This is literally the number of seconds between polls. By default it’s going to poll every 20 seconds, and of course you can modify that accordingly. You can see some other settings here. I’m not going to go into detail, but we can control what happens from a batch perspective: do we grab multiple items per batch, or maybe we want one document to be imported per batch? So you have a lot of that control here from the “Image Import Profile” screen.
The next screen tells us what we want to do with the document. How do we create the document? Do we want to use an “image enhancement profile”? Do we need to keep a searchable text layer on the document if there is one? So we can make some of those decisions here or leave it out of the box. And then lastly, what happens when a document comes in? Will the software delete it, or do we want it moved to a subfolder that the software will create, called “Processed” or “Exceptions”?
The next thing we will do is give it a name; I’m just going to leave this defaulted here and we’ll go from there. So now that I have a share created, you’ll see that within my share, so far I have nothing there. The way I can tell things are working is that the software will actually create some subfolders as documents come in and get processed. So what I typically do next is open up our Administration and Monitoring Console, which can be found from the “Start Menu” on the Application Server. And here it is; I’m going to go ahead and open that. On the “Settings” screen of the Administration and Monitoring Console, you will see the projects that you have listed. This is the project here that I selected, and I’m just going to go ahead and turn “Hot folders” on there. The next step, if all goes well and we’ve given the proper rights, is that we will see a subfolder called “Processing Tasks” get created here by the technology.
“Processing Tasks” is a system folder that the software will use. And actually, if I drop in a sample (I’ll just pick any sample), you’ll see the software will monitor this folder and automatically ingest the file. I set mine by default to 20 seconds, so the software will pick up this document, and you can see now it’s gone. Now it’s ingesting that document and following the proper workflow that I have outlined for that image import profile. You’ll actually see the software create other subfolders. If you remember, by default, when the software processes an item successfully, it will bring it in and move it to the “Processed” folder. So I’ll be able to see the batch name the software gave it and the file that was imported, or files if we selected multiples. So there are some subfolders, and once again, those subfolders are configured within that menu. If you recall, we told the software to move imported files to the “Processed” folder so that we have a history of them, and if there were exceptions, the software would create an “Exceptions” folder there as well.
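Conceptually, each poll of the hot folder behaves something like the loop below: pick up files, try to ingest them, and move them to Processed or Exceptions. This is an illustration of the observed behavior only, not FlexiCapture's implementation; the `ingest` callable stands in for handing the file to the capture workflow.

```python
import shutil
from pathlib import Path

# Conceptual model of one hot-folder poll cycle.
def poll_hot_folder(hot: Path, ingest) -> None:
    processed = hot / "Processed"
    exceptions = hot / "Exceptions"
    for sub in (processed, exceptions):
        sub.mkdir(exist_ok=True)
    for item in list(hot.iterdir()):      # snapshot before moving files
        if not item.is_file():
            continue                      # skip the subfolders themselves
        try:
            ingest(item)                  # hand the file to the workflow
            shutil.move(str(item), str(processed / item.name))
        except Exception:
            shutil.move(str(item), str(exceptions / item.name))
```

Running this on a schedule (say every 20 seconds) reproduces the pattern from the video: files vanish from the root of the share and reappear under Processed, or under Exceptions on failure.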
So I hope you enjoyed this video of how to create a hot folder, what we call “Image Import Profiles” within the technology. Thank you so much.
Watch our video to learn how to create a Repeating Group on a Fixed Form in ABBYY FlexiCapture. Understanding how to create repeating groups on a fixed form is an important technique to grasp when exporting data. Other helpful tips such as creating instances and viewing the data in a tabular format are explained in this demo as well.
Hello. In this video, I would like to share with you how we can create a repeating group on a fixed form. The reason why we would do a repeating group is really the desired output. Typically, when we have information that repeats, kind of like what you see on the screen, where we have four different sets of items, at the end of the day we may want to export to Excel, and we actually want each item to be an independent row. We always think about the export and what we need the export to do when we’re designing our document definition. So in today’s video, we’re going to create a repeating group that will put the ID, date, and type for each of these rows here into a separate row in our table, or group, if you will.
So what we’re going to do is go ahead and outline each of these fields. We’re going to say, hey, I want this ID, I want this date, and I want this type. We’re going to be pretty generous here on where we draw these items. The next thing we’re going to do is select them all, right-click, and say “Group”. What the software will do is put a little border around the whole outside of this group. Just for ease of what we’re going to do next, I’m going to expand this a little bit to be the border of everything I want to capture.
Now, as you can see here, what we’ve done is create a repeating group. Of course, we can do all sorts of cool things, naming the fields, setting data types, and all of those things that we do when we take the next step. But for this video, I want to explain how we can create instances of the repeating group. That means that we’re going to have multiple rows in our table, if you will. So what I will do is right-click and say “New Instance”. When I do that, the cursor will change and give me the ability to draw a new group instance; to do that, I can simply select here. You’ll see that it will automatically put in the borders for each of the fields. And you’ll see here that I have a new instance in my table, and I will just keep doing that for instances three and four as well.
So now that we have the groups created, I will come through here and do a little bit of cleanup, making sure that I have the fields selected that I wanted, and we’ll go from there. Okay. The next step that we typically do, now that the new instances are drawn, is to come into our fields. The cool part about having a group is that we only need to select the fields once; when I change the property of a field, it will apply to each instance. So in this case, I’m just going to select them all, go to “Properties”, and make sure that I have my marking type corrected here. And just to test what the layout will be, I will go to “Testing” and run the test. What you’ll see is that now that I have a group with multiple instances, when I export the data or run production documents, a spreadsheet or a database would actually have multiple rows, one for each of these instances, which is quite important, typically, when we think about exporting or downstream integration.
Just for a better view, I’ll actually tell the software to show this as a table, and now this may make a little more sense. What you can see here for each of these fields is really a tabular preview of what we’re going to see when we export the data. So this is what we call a repeating group. Otherwise, what we would have is multiple separate fields, but in this case, we’re actually getting a table, or a group of data, so that we can export it appropriately. So I just wanted to give you a quick hint and show you how we can do that when we create a repeating group on a fixed form, which is not something we commonly do, but it’s a technique that is pretty important when we’re talking about the way that we want to export the data. Thank you so much for watching this video.
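The payoff of the repeating group shows up at export time: each instance of the group becomes its own row in the exported table. Here is a small sketch of that flattening; the field names (ID, Date, Type) follow the example in the video, and the CSV target is just one possible export format.

```python
import csv
import io

# Flatten repeating-group instances into rows, one row per instance,
# the way a tabular export (Excel, database) would receive them.
def group_to_csv(instances: list) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["ID", "Date", "Type"])
    writer.writeheader()
    for instance in instances:
        writer.writerow(instance)
    return buf.getvalue()
```

Without the group, the same data would export as many separate, unrelated fields on one wide row, which is exactly what the repeating group avoids.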
Watch our video to learn how to create a Custom Export Script in ABBYY FlexiCapture. Custom export scripts allow you to perform various tasks, some of which include exporting a document to a PDF archival (PDF/A) format, making database calls, web service calls, and other API calls. In this demo you will learn how to create a custom export script that exports a document to a PDF/A format.
Hello. Today, I’m going to show you how to create a custom export script. Now, there are times in the workflow of a FlexiCapture project that we want to do something custom: maybe we want to call a web service API, or maybe we want to write to a custom database in a format that isn’t supported out of the box. The way that we do that within the solution is by using a custom export script. We will open up the document definition via the project menu and edit it. When that opens, we will go to the export settings at the top left. For today’s demo, we’re going to add a new export setting. The type in this case will be a Custom export (script). And simply for the sake of testing today, we will say that errors are irrelevant.
Now, this is really something that you need to control based on your project; it’s not really the point of today’s demo, but obviously set this to the proper condition based on your project needs. You’ll see the first thing that happens is we get a script. This script has a couple of things that we have access to: a document object and a processing object. Now, for today’s demo, I’m going to show you what is maybe a best practice in how to write this script. I want you to realize that you can do whatever you want here. My script is going to export the document to a PDF archival format, what we call PDF/A. You can write this script to do whatever you want: make a web service call, write your own custom database call, call other APIs. But what we’re going to do today is simply export the document to PDF/A, which is actually supported out of the box, but for today’s demo, we’re going to script it.
So one of the things that I like to do is use formal, good programming practices for something like this. Typically, we want some sort of try catch. The reason why is that we want the software to attempt to do something, and if there’s a failure, we’re going to allow the software to fail nicely. So I would write something like this.
And so that’s a very good way to set up our script here. In this case, I’m going to do a couple of things. I’m going to copy in some code, just to spare you from hearing me type. What we’re going to do is export this document to PDF/A; we’re going to give it a path and move on. Now, a couple of other things that you should know: not only do you have access to a document object and a processing object, but you also have access to all sorts of options in the export settings. In this case, I’m exporting to PDF/A, as I’ve told you, and you can see I have access to this export image saving options object here.
If you would like to know more about some of these settings, go to your start menu and open the Developer’s Help. The Developer’s Help is very strong within the solution: if you need to know more about a specific object, all you have to do is search for it, and you’ll be able to see all of the methods and properties available to you. There are also a lot of options documented well online. So in this case, we’re going to try to export to PDF/A. If that fails, then we want to do something smart with it: we want to be able to tell the software, through the software’s methods, how to report an error and how to clean up well. And what helps here is that we have access to a processing object.
So we would do something like this: we would call the processing object and report an error. When we report an error, the processing server will record an error for this task. That’s a very important piece that we’ll come back to in a second. In this case, we might also want to write to the log that there was a processing error during export and report the error there. Another best practice is to use this processing object for additional debugging, for example to note when a process starts and stops. Through the processing object we can report an error, a warning, or a message. Messages are very common: here’s an example where we add a message to the log recording when the export started and when it ended. From there, you can take your imagination and customize any sort of messaging you’d want, even for debugging purposes, as part of your script.
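The start/end messaging idea can be sketched the same way — again in illustrative Python, with `report_message` standing in for the real Processing object’s message reporting rather than matching the actual API:

```python
# Illustrative stand-in for reporting messages to the processing server
# log; the method name report_message is hypothetical, not the real API.

class MessageLog:
    """Stand-in for the processing server log messages are reported to."""
    def __init__(self):
        self.messages = []

    def report_message(self, text):
        self.messages.append(text)

def run_with_log(log, step_name, action):
    # Bracket a unit of work with messages so the server log records
    # when it started and when it ended -- handy for debugging.
    log.report_message(f"{step_name} started")
    action()
    log.report_message(f"{step_name} ended")
```

Anything between the two messages — an export, a database call — then shows up in the log with a clear start and end marker.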
Now, this would be a very common way that I would write a script. We do a try/catch. We use the processing object to report messages to the processing server log. And we access the document object to do something intelligent; you can see here I’m saving this document as a PDF/A. So, pretty standard: I’m going to export this document to this path via my export script. I’ll save this, finish up the export settings, save our document definition, and then of course, like always, publish it.
What I’m going to do now is actually run a document into the software. We’re going to open it up in a verification queue. And then what we’re going to do is we’re going to watch the export happen via the processing server application.
So I’m just going to go ahead and send up a document. Now, here at UFC, we have a neat little tool that allows you to right-click a file and send it to ABBYY FlexiCapture. That’s not the purpose of this demo, but if you’re interested in this tool, feel free to reach out to us. I’m just going to tell the software which project to send this up to, and we’ll get the document into the solution. As you know, with a working batch in this case, it’s going to go to a verification queue when it’s ready for us to review. So I will open up our Verification Station to review this document.
Now that we have the document in the queue, I’m simply going to come in here and, without even looking at the document, close the task. As you know, in a common workflow, that typically means we’re going to export this document. So I’m going to hit Close Task. The important part I want to show you is that we now have a task for that export, and you can see here it’s been exported. You may remember that I put in some processing report messages for debugging purposes that tell me when the export started and ended; those were the messages reported via the processing object. This document then succeeded, and you can see that here. I’m going to show you the exported file.
So if I open up my sample export, there I have my test PDF in PDF/A format. Once again, always use good scripting practice. This is a copy of the script here in Notepad; just remember that you want to use try/catches because you want to clean up errors as well. You also want to use the processing object so that you report back to the software cleanly when there’s an error, a message, or, in some cases, a warning instead of an error. All of that is available to you via the scripting language. And please always remember to reference the help. The help is your friend; use it, and get familiar with navigating it by the methods and properties you want to use in your scripting. I hope you enjoyed this video. If you have any other questions, please reach out to us. Thank you so much.
Watch our video to learn how to set up Service Level Agreements in ABBYY FlexiCapture and the various elements involved in this process. Some of these elements include: queues, task counts, workflows, and time limits.
Hello. Today I’d like to discuss with you setting up service level agreements, or what we refer to as SLAs, within ABBYY FlexiCapture. The concept of an SLA in workflow management is that within a queue, like what you see on my screen, items with a higher priority, or with an SLA that is close to expiring, are worked before other documents with lower SLAs. A couple of things happen to the product’s interface when we enable SLAs. First, you get a “Process Warning Task Count” and an “Overdue Task Count”. As items come close to their SLA mark, the software flags them with a warning to the end user; that’s what the first count represents. Once a document or batch passes its SLA deadline, we consider it overdue, hence the “Overdue Task Count”.
Now, at any point you can right-click and explore the queue. You can see here that I now have certain batches with expiration dates on them and a status of Expired. As batches approach their SLA, they may show a warning status, and as they expire, the status is updated accordingly. The concept is that when I get a task as an end user, so if I push my “Get Task” button, I will receive a batch that has either an expired SLA or an SLA that is close to expiring. So this really controls the round-robin behavior of the queue: it gives us the ability to reprioritize those SLA documents to a higher priority.
Now, in order to set this up, we go into our ABBYY FlexiCapture Project Setup Station and update the workflow. When you get into a workflow, just remember that batch types have their own workflows; what you’re currently looking at is my default workflow for the project, so you may need to enable this in a couple of different spots depending on the architecture of your project. What I’ve done here is enable processing time limits for each batch, which makes this “Set Time Limit” button available to us. When we click that button, you can see we have certain SLA settings. We can tell the software to use a time limit in minutes, hours, or days. We can also have the software issue a warning as documents get close to reaching that SLA limit, either triggered automatically or controlled on our own by setting a static value for the warning.
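When a plain minutes/hours/days limit isn’t enough — say the deadline should only count business hours — the arithmetic behind such a limit might look like the sketch below. This is illustrative Python logic, not the FlexiCapture scripting API, and the 9:00–17:00 weekday window is an assumed example:

```python
# Sketch of computing an SLA deadline that counts only business hours
# (assumed window: 9:00-17:00, Monday-Friday). Illustrative only.
from datetime import datetime, timedelta

BUSINESS_START = 9   # 9:00
BUSINESS_END = 17    # 17:00

def add_business_hours(start, hours):
    """Advance `start` by `hours`, counting only time inside the
    business window and skipping weekends."""
    remaining = timedelta(hours=hours)
    current = start
    while remaining > timedelta(0):
        # Weekend or past closing time: jump to the next day's opening.
        if current.weekday() >= 5 or current.hour >= BUSINESS_END:
            current = (current + timedelta(days=1)).replace(
                hour=BUSINESS_START, minute=0, second=0, microsecond=0)
            continue
        # Before opening time: jump forward to today's opening.
        if current.hour < BUSINESS_START:
            current = current.replace(hour=BUSINESS_START, minute=0,
                                      second=0, microsecond=0)
            continue
        # Consume as much of today's window as possible.
        window_end = current.replace(hour=BUSINESS_END, minute=0,
                                     second=0, microsecond=0)
        step = min(remaining, window_end - current)
        current += step
        remaining -= step
    return current
```

A two-hour SLA starting late on a Friday afternoon then lands on Monday morning rather than over the weekend, which is the kind of business rule a scripted time limit can express.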
The other option we have is the ability to set a time limit with a script, which opens up the scripting engine within the solution. The interesting part is that we can accommodate a lot of different business scenarios: we can look up information in databases, call web services, or set time limits based on business hours rather than just server clock hours. So there’s a lot of control when we set the time limit with a script. Nonetheless, don’t forget that the concept of the service level agreement, or SLA, is to change the priority of tasks or batches within the solution. Thank you so much for watching this video. If you have any questions, please reach out to us.
Watch our video to discover how ABBYY FlexiCapture can use the tools of classification, field extraction, machine learning, and image enhancement to help you capture receipts.
Hello. Today I’d like to show you receipt capture within ABBYY FlexiCapture. Now, this is really special technology because it encompasses the breadth of what we can do within FlexiCapture and all the neat tools that we have, including classification, field extraction, machine learning, and image enhancement. All of these come into play in a receipt capture project. On the screen, you can see I have 11 different receipts, and the software has extracted an expense type, a total, a vendor, and potentially quite a few other things. I’ll show you some of the samples. On the left is the information that we’re extracting out of the box. Of course, just like in any FlexiCapture project, you can add, remove, or modify fields, but here the software has determined this is a gas station bill, and you can see the other types of information it has captured.
Let me continue with some of these other samples. Here’s a toll bill. Here’s a hotel bill, where the software has extracted things like line items as well. Here’s a gasoline bill. Now, sometimes when we look at receipts, it’s actually really helpful for the end user to be able to see the original, because remember, the technology enhances the image so that it can read it and extract the best information it can. At any point, the end user can right-click and see the original image. This can be helpful for context: as a human reading a document, the original sometimes tells us something that the enhanced image used for OCR doesn’t. You can see some of those differences here: the textured background, the lighter text, and so on. The software enhances the image so we get the best read of the document, but as humans we don’t always get the full context from the enhanced version alone. So there’s a lot of information here at our disposal.
I’ll just continue showing you some here. Here’s a restaurant bill. Here’s a retail bill. Here’s a parking bill. A toll road. We have a hotel bill as well. And this is the cool part about the technology: it has extracted all of this information literally out of the box. The next thing that applies is the machine learning. At any point, if I need to teach the software, redirect it to look for a specific field, or fix a value, the software will remember those changes so that the next time I process a similar bill, it will be able to extract that information for us. That really helps receipt capture projects become more and more intelligent over time.
Then lastly, I’ll show you a car rental. You can see it’s not the prettiest image, but we got most of the information extracted off of that bill. Receipt capture is really a hard thing for most products to do, because most of them center on only one piece of technology. ABBYY’s machine learning, image enhancement, classification, and field extraction combined into one project show very well in this receipt capture type of project. Hope you enjoyed this video. Thank you so much.