Auto Review is a tool that performs first-pass responsiveness and issue review, incorporating large language model (LLM) technology to evaluate the text of documents based on your instructions (or tag descriptions). Like a human reviewer tagging and leaving a comment, Auto Review uses generative AI to suggest tags and provide reasoning for its suggestions.
Enabling Auto Review
Auto Review must be enabled in your database. If you wish to enable Auto Review, please contact your CSM, DISCO Project Manager, DISCO Account Executive, or the DISCO Desk (discodesk@csdisco.com) to obtain pricing and workflow information.
When Auto Review has been enabled, you can access it by clicking the icon below.
Key Auto Review Terms
- Suggestion – This is a Yes/No field that indicates whether Auto Review suggests a tag for a document based on the tag description provided.
- Suggestion Reason – This is the rationale Auto Review provides for its tagging suggestions.
- Case Background – This is overall information about the case that is relevant for all tags, like parties, claims, and common terms.
- Tag Description – These are the instructions you give Auto Review detailing when to apply a tag and what information to consider.
- Failed Document – This is a document that was unable to be reviewed by Auto Review. Failed documents require manual review.
- Skipped Document – This is a document that could not be reviewed due to a lack of text. Skipped documents are tracked separately from failed documents, but still require manual review.
Running an Auto Review job
To run Auto Review, you will need to define your review universe, enter and refine your case background and tag descriptions, and kick off the review job.
Define your review universe
First, create a folder containing only the documents to be reviewed. Select that folder using the folder browser located to the left of the search bar (below), ensuring the Include documents in subfolders toggle is off.
Enter your case background and tag descriptions
Next, enter your case background and tag descriptions. This is what gives Auto Review the context it needs to understand the matter and evaluate each document. Entering and iterating on the tag descriptions is a way of training Auto Review, like drafting a protocol and providing feedback to a human reviewer. (See Auto Review Best Practices for tips on drafting case background and tag description entries.)
To get started, open the Auto Review sidebar and enter a unique name for your review. Then enter your case background and select up to 10 tags to review. The tag selector only lists existing tags, so you will need to create any new tags before this step. Once you are done selecting tags, enter a tag description for each tag. There is a 15,000-character aggregate limit for case background and tag descriptions per review job.
If you have already run Auto Review, the case background, tags, and the tag descriptions will be auto-filled from the previous run. You can update any pre-filled tags or descriptions as needed.
Run your review
Finally, click the Review X documents button at the top of the Auto Review sidebar. To prevent you from incurring costs due to misclicks, a pop-up will appear asking you to confirm that you wish to run the Auto Review job. When the Pending count reaches 0 and the status icon changes from In Progress to Complete, all documents have been reviewed. If you wish to cancel an in-progress job, you can do so by clicking Stop this review. You will be billed only for the documents that Auto Review reviewed (not Failed or Skipped documents).
If you are unable to run Auto Review, click on the red error indicator to find out more information. Reasons why this may be occurring include:
- The documents to be reviewed are in ECA;
- No folder has been selected;
- No tags have been selected;
- No tag descriptions have been entered; or
- You do not have database permissions to run Auto Review.
Accessing Auto Review results
DISCO stores all tag suggestions (including for historical runs) and the most recent suggestion reasons. DISCO does not maintain suggestions or suggestion reasons for deleted tags.
You can access tag suggestions and suggestion reasons from the document viewer, from a document list, and via search. To avoid confusion while reviewing, historical suggestions are only accessible through search syntax and are not available through the viewer or document list.
From the document viewer
Generally, you can see Auto Review's most recent suggestions and suggestion reasons by looking for the star icon while reviewing a document. If the icon is green, Auto Review has suggested the tag; if it is gray, Auto Review did not suggest the tag; and if it is not present, Auto Review did not evaluate the tag. You can view the suggestion reason by hovering over the icon.
In review stage batches, Auto Review results operate similarly to tag predictions, showing both suggested and not suggested tags. In the example below, Raptor was positively suggested (green icons), while California Energy Market and LJM were reviewed but not suggested (gray icons).
Outside of review stage batches, the panel is slightly different based on whether a human reviewer applied a given tag. Tags applied by a human – including those that were also evaluated by Auto Review – appear in the Tags box. Auto Review suggested tags that were not applied by a human appear under the Auto Review Suggestions box. To reduce clutter, Auto Review Suggestions only displays positively-suggested tags; tags that were evaluated but not suggested can be viewed by clicking into the Tags box.
From the document list
Auto Review's most recent suggestions can also be added to custom views in your document list by selecting the fields Suggested as likely and Suggested as unlikely under Work Product/Tags (below).
Once you’ve added the columns to your view, Auto Review results will appear in the columns and bear the green or gray star icons, with suggestion reasons accessible by hovering over the tag. If a human reviewer has selected a tag, it will have a filled-in background to assist in visual tracking of disagreements with Auto Review.
You can also export these fields to an XLSX file via DISCO's document list export feature. The resulting export will have a column containing the tag name, the suggestion (either yes or no), and the suggestion reason, separated by semicolons. If multiple tags have been Auto Reviewed for a document, double semicolons will separate each tag grouping.
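If you need to work with this export programmatically, the minimal sketch below shows one way to split such a cell back into per-tag records, assuming a cell formatted as described above (tag name, suggestion, and reason separated by semicolons, with double semicolons between tag groupings). The sample cell value and the helper name are hypothetical, not part of DISCO's export.

```python
# Minimal sketch (not part of DISCO): split an exported Auto Review cell
# into per-tag records, assuming the semicolon format described above.
def parse_auto_review_cell(cell: str) -> list[dict]:
    records = []
    for group in cell.split(";;"):  # double semicolons separate tag groupings
        if not group.strip():
            continue
        tag, suggestion, reason = [part.strip() for part in group.split(";", 2)]
        records.append({"tag": tag, "suggestion": suggestion, "reason": reason})
    return records

# Hypothetical cell value for illustration only
cell = "Raptor; yes; Discusses the Raptor entities;; LJM; no; No mention of LJM"
for record in parse_auto_review_cell(cell):
    print(record)
```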
Via searching
Finally, you can find all tag suggestions via searching, either by using the links embedded in the Auto Review side panel or by leveraging search syntax.
To use the links, open the Overview tab in the Auto Review sidebar to bring up the list of all tags Auto Reviewed for each job. Each tag shows the tally of documents suggested during that job. Clicking that number will add a search to the search bar with the exact syntax to find the set of documents.
To input the syntax directly, insert aitagdecision(“Tag Name”, Y, “job name”) into the search bar, where “Tag Name” is the name of the tag in quotes, Y or N indicates whether or not the tag was suggested, and “job name” is the unique identifier Auto Review creates for the job. For example, if you are looking for documents Auto Review suggested for the California Energy Market tag in the job with the identifier 2f2a7091-72da-42be-895c-fc4b22fea978, you would enter the syntax aitagdecision("California Energy Market", Y, 2f2a7091-72da-42be-895c-fc4b22fea978). If you wish to find all documents Auto Review evaluated for that tag, replace the Y with Y or N.
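For example, using the same tag and job identifier as above, the first search below returns only the documents Auto Review positively suggested for the tag, while the second returns every document it evaluated for that tag:

```
aitagdecision("California Energy Market", Y, 2f2a7091-72da-42be-895c-fc4b22fea978)
aitagdecision("California Energy Market", Y or N, 2f2a7091-72da-42be-895c-fc4b22fea978)
```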
Accessing Auto Review metrics
Auto Review provides metrics in each job card within the Auto Review sidebar, shown below. Each card details how many documents were submitted, start and end times, the person who ran the job, and the total run time. If some documents from an Auto Review job are later deleted from a database, this top-level card will remain unchanged, but the searches and statistics related to that job will change to reflect the loss of those documents.
Clicking into a card reveals three tabs: Overview, Metrics, and Instructions, detailed below.
Overview
The Documents overview section contains clickable tallies (each linked to a search) for the documents that were successfully Reviewed, that Failed, and that were Skipped.
The Documents by tag section contains clickable tallies for the tags suggested based on the descriptions used in that job.
Metrics
The Metrics tab allows you to access overall review metrics and compare the suggestions provided by Auto Review to tags that have been applied by human reviewers.
The comparison dropdown defaults to using All documents, and can be switched to use your current Search criteria, as shown below.
When using the All documents option, the comparison tool uses the full set of documents that were successfully reviewed in that specific Auto Review job. When using the Search criteria option, the comparison tool limits the set to the overlap between the documents that were successfully reviewed in that specific Auto Review job and the documents that are in your current search.
Overall metrics
The Overall metrics section provides the metrics for all of the tags that were Auto Reviewed in that job. Values that are averages are calculated by adding the value from each tag and dividing by the number of tags (see the sketch after the list below). Detailed definitions of each metric are below; you can also get information about each of the metrics by hovering over the Info icon next to the metric.
- Agreement rate – This is the percentage of Auto Review’s decisions that agree with your team's decisions. This is an average value from all of the tags reviewed in this job.
- Comparison – This is the percentage of documents reviewed by this Auto Review job that are also present in the set of documents being compared. This value will always be 100% when Documents to compare against is set to All documents.
- Prevalence (user) – The percentage of documents for which a human reviewer believes at least one tag should apply. This is NOT an average value.
- Prevalence (Auto Review) – The percentage of documents for which Auto Review believes at least one tag should apply. This is NOT an average value.
- Precision – This is the rate at which Auto Review was correct when it positively suggested a tag. Accuracy is determined by tracking the human application of the suggested tag. This is an average value from all of the tags reviewed in this job.
- Recall – This is the rate at which Auto Review suggested a tag when that tag was actually applied to a document. This is an average value from all of the tags reviewed in this job.
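To make the averaging concrete, here is a minimal sketch (not DISCO's implementation) of how per-tag values could be combined into the overall figures described above. The tag names are taken from the examples earlier in this article, and the metric values are invented for illustration.

```python
# Minimal sketch of the per-tag averaging described above (illustrative values only).
# Each overall metric is the sum of the per-tag values divided by the number of tags.
per_tag_metrics = {
    "Raptor":                   {"agreement_rate": 0.92, "precision": 0.85, "recall": 0.78},
    "California Energy Market": {"agreement_rate": 0.88, "precision": 0.80, "recall": 0.90},
}

def overall(metric_name: str) -> float:
    values = [metrics[metric_name] for metrics in per_tag_metrics.values()]
    return sum(values) / len(values)

for name in ("agreement_rate", "precision", "recall"):
    print(f"Overall {name}: {overall(name):.0%}")
```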
Agreement rate by tag
The Agreement rate by tag section allows you to drill down into the metrics for each tag in a job. This section provides an error matrix, as well as five percentages useful for understanding and defending the effectiveness of your review.
The error matrix is the two-by-two grid comparison chart of Auto Review's suggestions versus the currently-applied tagging. All of the values within the error matrix are links, allowing you to review the agreements and disagreements between Auto Review and human reviewers. Reviewing disagreements is particularly helpful in guiding you in honing your tag descriptions or adjusting your review process.
This section also provides five percentages that are standard metrics for evaluating the accuracy of a review, detailed further below.
- Prevalence (user) – This is the number of documents to which a human reviewer applied the tag divided by the total number of documents.
- Prevalence (Auto Review) – This is the number of documents for which Auto Review suggested the tag divided by the total number of documents.
- Precision – This measures how often Auto Review was correct for the documents in which it positively suggested the tag. This is calculated by dividing the agreement value in the top left (the true positives) by the sum of the top row (the true positives and the false positives).
- Recall – This measures how often Auto Review suggested the tag within the set of documents that have the tag. This is calculated by dividing the true positives in the top left by the sum of the left column (the true positives and the false negatives).
- F-1 score – This is the harmonic mean of the precision and the recall. This is calculated by adding the reciprocals of the precision and the recall, dividing by 2, and then taking the reciprocal (see the sketch following this list).
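To tie these definitions together, the sketch below computes each of the five percentages from a hypothetical error matrix laid out as described above (Auto Review's suggestion on the rows, the human-applied tag on the columns). All of the cell counts are invented for illustration.

```python
# Minimal sketch: per-tag metrics from a hypothetical two-by-two error matrix.
true_positives = 40    # top left: Auto Review suggested, human applied the tag
false_positives = 10   # top right: Auto Review suggested, human did not apply
false_negatives = 5    # bottom left: Auto Review did not suggest, human applied
true_negatives = 45    # bottom right: neither suggested nor applied
total = true_positives + false_positives + false_negatives + true_negatives

prevalence_user = (true_positives + false_negatives) / total
prevalence_auto_review = (true_positives + false_positives) / total
precision = true_positives / (true_positives + false_positives)   # top left / top row
recall = true_positives / (true_positives + false_negatives)      # top left / left column
f1 = 1 / (((1 / precision) + (1 / recall)) / 2)                   # harmonic mean

print(f"Prevalence (user): {prevalence_user:.0%}")
print(f"Prevalence (Auto Review): {prevalence_auto_review:.0%}")
print(f"Precision: {precision:.0%}  Recall: {recall:.0%}  F-1 score: {f1:.0%}")
```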
Instructions
The Instructions tab provides a record of the specific case background and tag descriptions that were used for that job. This allows you to compare the instructions used in your iteration process.
Managing Auto Review permissions
Admin-level users can manage permissions for both running Auto Review and accessing Auto Review results through DISCO’s user role interface, with additional limitations available in review stages.
Managing access via user roles
To update permissions for a user role, first select the role through Menu/Review Team. At a baseline level, users must have at least View access to all tags within the database to use or view results from Auto Review. Specific Auto Review permissions are located under Explore/Auto Review and Document Viewer/View Tag Suggestions (shown below).
Explore/Auto Review
- Unchecked – The user will not have access to the Auto Review sidebar. The Restricted Reviewer role defaults to this option.
- View – The user will have access to view the Auto Review sidebar – including job cards, results, and metrics – but will not be able to run Auto Review. The Reviewer role defaults to this option.
- Manage – The user has full access to Auto Review, including the ability to run a new job. The Admin role defaults to this option.
Document Viewer/View Tag Suggestions
- Unchecked – The user cannot see indicators for suggestions or suggestion reasons within the document viewer.
- Checked – The user can see such indicators. DISCO’s standard roles default to this option.
Limiting view access in the review panel
You can also limit a reviewer’s ability to see tag suggestions in review stage batches when setting up a new stage, similar to limiting predicted tag scores. In the Preferences section of the Edit review decisions page, toggle off Suggested tag details.