Summarize

Takes a document (raw text or URL) and returns a summary of the text.

AI21 Studio's /summarize API takes a piece of text, or fetches text from a given URL, and generates grounded summaries that remain faithful to the original document (i.e., no external information is added in the process). The summaries are formatted as bullet lists, following the original text flow. For the full API functionality, see the API Reference.

Don’t just take our word for it, give it a go! 💫
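To get a feel for it, here is a minimal sketch of a call in Python using the requests library. The endpoint path and the source / sourceType / summary field names are assumed to match the API Reference; the API key is a placeholder.

```python
# A minimal sketch of a /summarize call. The endpoint path and the
# source / sourceType / summary field names are assumed to match the
# API Reference; replace the placeholder key with your own.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder

response = requests.post(
    "https://api.ai21.com/studio/v1/summarize",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "source": "Your long document text goes here ...",
        "sourceType": "TEXT",  # use "URL" to summarize a webpage instead
    },
)
response.raise_for_status()
print(response.json()["summary"])  # a single string of bullet points
```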

Features

In summary

Summaries are returned as a single string containing bullet points that represent the key points of your text.

The input text should contain at least 40 words and no more than 50,000 characters. That translates to roughly 10,000 words, or an impressive 40 pages! For URLs, this limit applies only to the summarizable text extracted from the page (after parsing, preprocessing, etc.).
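If you want to catch undersized or oversized inputs before making a request, a client-side guard is easy to sketch. This check is illustrative, not part of the API; the thresholds simply mirror the limits stated above.

```python
# Illustrative client-side guard mirroring the documented input limits:
# at least 40 words and at most 50,000 characters.
def check_summarizable(text: str) -> None:
    """Raise ValueError if `text` falls outside the documented limits."""
    if len(text.split()) < 40:
        raise ValueError("Input must contain at least 40 words.")
    if len(text) > 50_000:
        raise ValueError("Input must not exceed 50,000 characters.")
```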

Different types

In addition to working with free text, this API can also work directly with your favorite (or least favorite) webpage URLs! No need to spend time and effort scraping text yourself - just input the required URL and let the summarization begin.

Note: if the webpage you are trying to summarize is behind a paywall or has restricted access, your call will fail and return an error.
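Switching to URL summarization only changes the sourceType field. Here is a sketch building on the call above (same assumed endpoint and field names, placeholder key and URL); per the note, a restricted page is expected to surface as an error response:

```python
# Summarizing a webpage by URL: only the sourceType changes. Endpoint and
# field names are assumed as above; a paywalled or restricted page is
# expected to come back as an error status.
import requests

response = requests.post(
    "https://api.ai21.com/studio/v1/summarize",  # assumed endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder key
    json={
        "source": "https://example.com/some-article",  # placeholder URL
        "sourceType": "URL",
    },
)
if response.ok:
    print(response.json()["summary"])
else:
    # e.g. the page is behind a paywall or otherwise inaccessible
    print(f"Summarization failed: {response.status_code}: {response.text}")
```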

Examples

With just a few lines of code, you can provide your users with high-quality summaries, whether they are students performing a massive literature review, journalists sifting through large amounts of information, or average readers who just need the gist of a long report. The following examples illustrate both use cases and special features you can take advantage of with this API.

Summarization of news

Keeping up with world events is important, but with the current information overload, who has the time?
With our Summarize API, you can provide your users with the gist of various news articles, letting them choose what they want to read more about.
As an example, below is a summary of a single news item.

Original text:
The error affected a number of international flights leaving the terminal on Wednesday, with some airlines urging passengers to travel only with hand luggage.
Virgin Atlantic said all airlines flying out of the terminal had been affected.
Passengers have been warned it may be days before they are reunited with luggage.
An airport spokesperson apologised and said the fault had now been fixed.
Virgin Atlantic said it would ensure all bags were sent out as soon as possible.
It added customers should retain receipts for anything they had bought and make a claim to be reimbursed.
Passengers, who were informed by e-mail of the problem, took to social media to vent their frustrations.
One branded the situation "ludicrous" and said he was only told 12 hours before his flight.
The airport said it could not confirm what the problem was, what had caused it or how many people had been affected.
Summary:
A number of international flights were affected by the error on Wednesday, with some airlines urging passengers to travel only with hand luggage. Passengers were informed by e-mail of the problem and took to social media to vent their frustrations.
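Because the summary comes back as a single string of key points (see In summary above), a few lines are enough to render it as a bullet list. A sketch, assuming the points are separated by newlines:

```python
# The summary arrives as one string of key points; assuming the points are
# newline-separated, a simple split renders them as a bullet list.
summary = (
    "A number of international flights were affected by the error on Wednesday, "
    "with some airlines urging passengers to travel only with hand luggage.\n"
    "Passengers were informed by e-mail of the problem and took to social media "
    "to vent their frustrations."
)
for point in summary.splitlines():
    print(f"- {point.strip()}")
```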

Summarization of academic article

The sheer number of papers published every year makes it difficult to keep track. Our Summarize API lets your users stay up to date with current academic advancements, so they can focus on their own work rather than spend all their time reading research papers.

As an example, below is a summary of the landmark paper Attention Is All You Need:

Original text:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
Summary:
The Transformer is a new simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. It achieves superior results on two machine translation tasks, including a new single-model state-of-the-art score of 41.8 BLEU.