Lineart vs Greyscale scans for text

When faced with creating artwork for a black and white book from a previous print run that has no digital artwork available, the fastest and cheapest option is to scan the book. This assumes that there are no changes to the text, that the book is for print only, and that the client has allowed the spine to be cut off the book so that the pages can be fed through a document feeder. If the book is text only with no halftones, I would recommend the scans end up as 1200 dpi linearts. If the pages also contain images such as photographs, however, then two sets of scans are required – one for the text (1200 dpi linearts), and one for the images (300dpi greyscales). The two sets of scans then have to be combined by placing the images into InDesign.
One might ask, “why not just use greyscale scans at 1200dpi?” Apart from the file size, the text will look terrible when printed. To understand why higher resolution doesn’t always mean better quality, the answer lies in one process: rasterising.
Regardless of whether a greyscale scan is 300dpi, 600dpi or 1200dpi, the scan still has to go through the rasterising stage on the copier, where a filter is applied to reproduce the shades of grey that the artwork may contain. This is usually done with a halftone filter in the copier’s RIP software, where the images are converted into halftone dots – measured in LPI, or Lines (of dots) Per Inch.
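
To get a feel for what the RIP is doing, here is a minimal sketch in Python that mimics a clustered-dot halftone screen. It is only an illustration – the copier’s actual screening algorithm is more sophisticated – and the filenames are placeholders. A 4×4 screen cell on a 600dpi scan works out to a 150lpi screen (600 ÷ 4 = 150):

```python
# Illustration only: tile a small clustered-dot threshold matrix across a
# greyscale scan so that each grey level becomes a pattern of on/off dots,
# roughly what a halftone screen in the RIP does to the image.
import numpy as np
from PIL import Image

# 4x4 clustered-dot screen, scaled to thresholds in the 0-255 range.
SCREEN = (np.array([
    [12,  5,  6, 13],
    [ 4,  0,  1,  7],
    [11,  3,  2,  8],
    [15, 10,  9, 14],
], dtype=float) + 0.5) * 255 / 16

def halftone(path_in: str, path_out: str) -> None:
    grey = np.asarray(Image.open(path_in).convert("L"), dtype=float)
    h, w = grey.shape
    # Repeat the screen over the whole page, then threshold pixel by pixel.
    tiled = np.tile(SCREEN, (h // 4 + 1, w // 4 + 1))[:h, :w]
    bilevel = (grey > tiled).astype(np.uint8) * 255
    Image.fromarray(bilevel).save(path_out)

# halftone("sample_600dpi.tif", "sample_600dpi_halftoned.tif")  # placeholder names
```
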
Using a 150lpi halftone filter, here is what happens when the same images are rasterised:
The 600dpi and 1200dpi images do look better than the 300dpi, but the type is still not sharp, and looks bumpy. This is because of the halftoning that occurs in the RIP. Whatever resolution the text was scanned at, the bumps appear in the same spots in each scan (though their severity differs with the dpi).
Lineart images are different: there are no shades of grey, and no halftone is applied to them. So after passing through the same 150lpi halftone filter, here is how the images look once rasterised:
Here, the difference in dpi does matter, as the 1200dpi lineart is sharper than the 600dpi lineart, which is definitely sharper than the 300dpi lineart.
Here is the side-by-side comparison:
So by creating the book with separate scans for the images and the type, the quality will be greatly improved, but the book will take longer to set up.

Lineart “Spicks and Specks” remover for scanned text

Removing the spots from a lineart scan is a boring task, but a necessary one when trying to create an identical copy of previously printed text.

The usual way of minimising the clean-up of rogue dots is to scan the original as a 1200 dpi greyscale, use combinations of levels and curves to remove the highlights and emphasise the shadows, then convert the greyscale to lineart using the 50% Threshold method. Nevertheless, sometimes there are stubborn dots that won’t go away with this process. Also, scanning hundreds of pages at 1200 dpi in greyscale (so the images can be converted 1:1 to lineart using the 50% Threshold) requires lots of memory and hard drive space, so for this brief the pages were scanned as lineart images.
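
For reference, the greyscale-then-threshold route described above can be sketched outside Photoshop as well. Here is a minimal Python/Pillow version; the black point, white point and filenames are placeholders that would be tuned per book:

```python
# Rough equivalent of the "levels, then 50% Threshold" clean-up: clip light
# speckle towards white, push the text towards black, then binarise.
from PIL import Image

def greyscale_to_lineart(path_in: str, path_out: str,
                         black_point: int = 60, white_point: int = 200) -> None:
    grey = Image.open(path_in).convert("L")
    span = white_point - black_point
    # Levels: everything at or below the black point becomes solid black,
    # everything at or above the white point becomes paper white.
    levelled = grey.point(
        lambda v: 0 if v <= black_point
        else 255 if v >= white_point
        else int((v - black_point) * 255 / span)
    )
    # 50% Threshold: anything darker than mid-grey becomes black, the rest white.
    lineart = levelled.point(lambda v: 0 if v < 128 else 255).convert("1")
    lineart.save(path_out)

# greyscale_to_lineart("page_001_1200dpi.tif", "page_001.tif")  # placeholder names
```
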

One particular brief was to recreate a novel exactly as it had been printed previously, but as the book was last printed 15 years ago, the native files were no longer available. While the cover could be re-set, the black and white text had to be scanned in, rather than run through Optical Character Recognition (OCR) and reformatted into a new InDesign document (this would take too long and require many proofs). The client gave permission to cut the cover from the book so that the text could be fed through the scanner’s Auto Document Feeder (ADF).

A script had already been produced that would remove rogue dots – but sadly it did not work with the latest versions of Photoshop. An answer turned up in a post on the Adobe Forums, where Evgeny Trefilov (17th post in) presented a filter he had made. Initially, it too did not work with the latest version of Photoshop, but a 64-bit plug-in was created to work with CS6 and above. Evgeny’s plug-in does require the images to be greyscale.

[Image: the settings dialog of Evgeny’s plug-in]

Above: the user interface of Evgeny’s plug-in. Set the Threshold, Max Value, Block Size and C value as above; to fine-tune the plug-in so that dots don’t disappear above “i”s and letters don’t fill in, adjust the two red sliders until the desired results are achieved.

For this brief, a sample file was used (out of the many that were scanned) to test Evgeny’s filter, and once the settings were refined to make sure that only the rogue dots were removed and larger dots preserved (e.g. the dot of the letter “i” or a full stop), an action was made in Photoshop to do the following (a rough scripted equivalent is sketched after the list):

  • Convert from lineart to greyscale;
  • Run Evgeny’s plug-in;
  • Convert from greyscale back to lineart;
  • Save and close.
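
For anyone who would rather batch this outside Photoshop, here is a minimal Python/Pillow sketch of the same sequence of steps. It is not Evgeny’s plug-in – it simply uses a median (majority) filter, so isolated specks smaller than roughly half the filter window disappear while the dot of an “i” or a full stop keeps enough black neighbours to survive. The window size and folder names are placeholders:

```python
# Rough equivalent of the Photoshop action above (illustration only, not
# Evgeny's plug-in): convert each lineart scan to greyscale, despeckle it,
# convert it back to lineart and save it.
from pathlib import Path
from PIL import Image, ImageFilter

def despeckle(path_in: Path, path_out: Path, window: int = 5) -> None:
    grey = Image.open(path_in).convert("L")                       # 1. lineart -> greyscale
    cleaned = grey.filter(ImageFilter.MedianFilter(size=window))  # 2. remove rogue dots
    lineart = cleaned.point(lambda v: 0 if v < 128 else 255).convert("1")  # 3. back to lineart
    lineart.save(path_out)                                        # 4. save

# Placeholder folder names - run over every scanned page in one go.
if __name__ == "__main__":
    src, dst = Path("scans_lineart"), Path("scans_cleaned")
    dst.mkdir(exist_ok=True)
    for page in sorted(src.glob("*.tif")):
        despeckle(page, dst / page.name)
```
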

This plugin (download here) and action worked well and certainly saved dozens of hours of removing rogue dots from lineart scans of text.

It should be stressed that this plug-in was appropriate for scanned lineart text; when cleaning up illustrations, diagrams or photographs, be careful, as incorrect settings can have major consequences.

Correcting the scanned pages is one thing; to place the pictures into a new InDesign file, one of the techniques from this article was used.

Live Text Masking in InDesign

Until now, making a text mask in InDesign has meant converting the type to curves so that it can be treated like a placeholder. This works fine until the type has to change, such as when:

  • Correcting a typographical error;
  • Including more text or resizing it; or
  • During a data merge.

In these instances, it would be ideal for the type to still be live so that changes could be made while maintaining the masking of the type.

Surely someone had thought of this before… but instead of being an easy Google search, it took hours of research to find this little nugget of information from the Phoenix InDesign Users Group.

Excited, I followed the instructions to the letter, but discovered that this trick isn’t true text masking. Let me explain.

A text mask created the usual way – by converting the type to curves and placing the image within the resulting shape – works this way:

However, a text mask created using the tutorial from the Users Group behaves more like a stencil. That is, it does mask the image, but the stencil around the letters remains visible:

This is fine if the background is white… but in this instance the background is pale yellow. The solution here is to send the image to be masked to the background, and bring the image that was in the background to the foreground:

So while this technique works, it does not work as well as masking within shapes, given that:

  • not all effects (drop shadows, bevels, transparencies) can be applied to the masked text; and
  • the background is effectively brought forward, and the item to be masked effectively sits in the background.

This technique also works with background images. To show this, I’ve updated an earlier post featuring “Square Pegs Round Holes” to demonstrate how this masking works with live text.

As usual, files for the above demonstration can be found here.

Keeping a “Fit” Image, or making an Image Fit?

Adobe InDesign’s frame fitting options can be quite useful, but with ten possible combinations it can be difficult to remember which fitting option to use.

Rather than use trial and error, why not refer to a chart which I’ve made especially for this purpose?

The downloadable PDF version for print is available from this link.
