Training ARToolKit NFT to a new surface

From ARToolworks support library

About surfaces we can use with ARToolKit NFT

ARToolKit NFT tracks from the natural features of planar textured surfaces. The current implementation of the tracking algorithm requires that the visual appearance of the surface is known in advance, so the system must be "trained" in advance to the appearance of the particular surface we want to use for tracking. The output of this training is a set of data which can be used for realtime tracking in our application.

The following constraints apply to surfaces which can be used with ARToolKit NFT.

  • The surface to be tracked must be supplied as a rectangular image. Currently, only JPEG images are recognised.
  • The surface must have a low degree of self-similarity, i.e. it must be textured. Images with large areas of a single flat colour will not track well (if at all), since no unique features can be identified in the interior of those flat areas.
  • Larger or higher resolution images (more pixels) will allow the extraction of feature points at higher levels of detail, and thus will track better when the camera is closer to the image.

In the standard version of ARToolKit NFT, there is one additional requirement of the surface we want to track:

  • Images must have a fiducial ARToolKit marker either in the image or around the outside of it. There must be at least one marker per image (optionally more than one). The marker(s) must be square, must have a black border and lie on a white background, or vice versa. The marker(s) do not have to be any particular size.

At the time of writing, ARToolworks is testing an enhancement to ARToolKit NFT which removes the requirement to have fiducial markers in or around the image.

Preparing an image for use as a surface

Summary: A typical workflow for producing NFT markers proceeds thus:

  1. A high-resolution image which is to form the basis of the marker is obtained. If the source texture is on paper, it must be scanned.
  2. If a fiducial marker is to be used, and if it must form part of the image (rather than being placed around the image), then using Adobe Photoshop or some other image editing application, a fiducial marker is placed into the digital image.
  3. The resulting image is saved in JPEG format and fed into the NFT training application.
  4. If a fiducial marker has been placed into the digital image, then the image must be printed. Print on a good-quality colour printer, on low-gloss paper, to produce the final image which will be tracked. In this case, the original paper artwork is not used for tracking (because it does not contain the fiducial marker).

Producing a digital image of the marker surface

How big, and what resolution? During production of marker artwork, or scanning of pre-existing artwork, a natural question arises: how big (in pixels) should the marker image be? To answer this question, we must consider three inter-related factors.

  • How large do you want the physical marker surface to be when printed, i.e. what are its dimensions in inches or millimetres?
  • How close to the camera will the printed marker be? This relates to the required resolution of the printed image, commonly expressed in pixels per inch or dots per inch (DPI). As a guide, most laser printers produce 300 dpi black-and-white images, while colour printers usually use a dot-screen at 150 dpi (although they may advertise higher resolutions, almost all print at 150 dpi).
  • The product of the above two factors is the pixel dimension, which is the "size" of an image as reported in an image editing application on a computer. Dimension in pixels = physical dimension in inches x dots per inch. The width in pixels multiplied by the height in pixels is the total number of pixels in the image, commonly expressed in "megapixels".

So, first, consider how big the physical tracked image needs to be (in inches or millimetres). If the image is to be a page in a book, then the size of the pages will determine this factor. A common size might be A4 (210 mm x 297 mm) or US Letter (8.5 inches x 11 inches, or 215.9 mm x 279.4 mm). Second, determine the desired resolution; 150 dpi is a starting point for many images.

With both factors chosen, you will know what pixel dimensions the digital or scanned artwork needs to have. For example, borderless A4 at 150 dpi is 1240 pixels wide and 1754 pixels tall, and borderless US Letter at 150 dpi is 1275 pixels wide and 1650 pixels tall.
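
As a quick check of these figures, the short Python sketch below applies the "dimension in pixels = physical dimension in inches x dots per inch" rule from above. The page sizes and the 150 dpi value are just the examples used in this section; substitute your own marker size and print resolution.

# Quick check of the "pixels = inches x dpi" rule for choosing image size.
# The page sizes and 150 dpi figure below are only examples.

MM_PER_INCH = 25.4

def pixel_dimensions(width_mm, height_mm, dpi):
    """Return (width, height) in pixels for a print of the given physical size."""
    return (round(width_mm / MM_PER_INCH * dpi),
            round(height_mm / MM_PER_INCH * dpi))

print(pixel_dimensions(210, 297, 150))      # A4 at 150 dpi -> (1240, 1754)
print(pixel_dimensions(215.9, 279.4, 150))  # US Letter at 150 dpi -> (1275, 1650)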

Fiducial marker placement

Once the image is in digital form, a fiducial marker may be embedded. The marker(s) can have black or white borders. If using black borders, the marker must sit on an area of white or very light-coloured background. Add an extra white border around the black border if necessary. If using white borders, the marker must sit on an area of black or very dark coloured background. The inner half of the marker forms the unique portion, i.e. for a marker 80 mm wide, the inner 40 mm in both vertical and horizontal dimensions is the unique portion.

A black-bordered marker must sit on a white or near-white background. Where the background is not white, as in the MagicLand3 example image, an extra area of white background must be added around the marker.

The marker does not have to be inside the portion of the image which is used as the NFT surface; it can also be placed outside it, as the example images demonstrate.

Generating an ARToolKit NFT surface from a prepared image

Surface training uses a set of utilities included in the ARToolKit NFT package. These utilities must be run from the command line. On Windows, this means you must open a "cmd" console and cd to the ARToolKitNFT\bin directory. On Unix systems (Linux and Mac OS X), open a terminal window and cd to the ARToolKitNFT/bin directory.

Deciding on the image set resolutions

Most of the training procedure requires few decisions, but the first step involves the biggest one: selecting the resolutions at which features of the image will be extracted. Generally, features are extracted at three or more resolutions, because the detail in the image appears at different effective resolutions to the software depending on how close or far away the camera is from the image.

The default set of resolutions is 30, 60 and 90 dots per inch. This is adequate for some images, but for higher-resolution images, a larger range of resolutions is recommended. The MagicLand sample image uses 6 resolutions: 20, 40, 60, 80, 100 and 120 dpi. There is no value in using resolutions higher than the actual resolution of the final printed marker (i.e. it is not recommended to use resolutions higher than 150 dpi).

The utility program "checkResolution" can help with the decision of what values to use as minimum and maximum resolutions. It displays the expected resolution of the point on the tracked surface in the middle of the camera image.
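
checkResolution gives the definitive answer, but as a rough rule of thumb you can also estimate the apparent resolution of the surface from the camera's pixel width, its horizontal field of view and the working distance. The sketch below is only a back-of-envelope approximation that assumes the surface directly faces the camera; it is not what checkResolution computes, and the camera figures in it are example assumptions.

# Rough estimate of the resolution (in dpi) at which a flat, camera-facing
# surface appears in the camera image, to help pick minimum and maximum
# image set resolutions. This is an approximation only, not how
# checkResolution works; the camera parameters below are examples.

import math

def apparent_dpi(camera_width_pixels, horizontal_fov_degrees, distance_mm):
    # Width of surface (in mm) spanned by the camera's field of view at this distance.
    visible_width_mm = 2.0 * distance_mm * math.tan(math.radians(horizontal_fov_degrees) / 2.0)
    return camera_width_pixels / (visible_width_mm / 25.4)

# Example: a 640-pixel-wide camera with a 45-degree horizontal field of view.
for distance_mm in (200, 500, 1000):
    print(distance_mm, "mm ->", round(apparent_dpi(640, 45.0, distance_mm)), "dpi")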

After completing a training pass, it will pay to come back to the choice of image set resolutions and experiment with different minimum and maximum resolutions, and the number of resolutions. The choice depends greatly on the way in which you intend to use ARToolKit NFT for tracking, and your source images.

If you have further questions, ask the ARToolworks support staff, and/or other users of ARToolKit NFT, on the support forum.

First step: create an image set

In the first step, the source image is resampled at multiple resolutions, generating an image set (.iset) file.

Run genImageSet.exe, providing the image as a command line argument. E.g.:

  • Windows: genImageSet.exe mycoolimage.jpg
  • Linux / Mac OS X: ./genImageSet mycoolimage.jpg

You will be prompted for the resolutions you wish to use. See above.

Second step: training the features

In this step, the system trains itself to the features of the image at the various resolutions. This is the most time-consuming step in the process, and may take up to an hour for larger images with multiple resolutions. The output of this step is a set of featuremap (.fmap-xx) files.

Run genFeatureMap.exe, providing the image set as a command line argument. E.g.:

  • Windows: genFeatureMap.exe mycoolimage.iset
  • Linux / Mac OS X: ./genFeatureMap mycoolimage.iset

Third step: combine trained features into a set

In this step, a configuration file is generated combining the feature maps generated in step 2. The output of this step is a feature set (.fset) file.

Run genFeatureSet.exe, providing the image set as a command line argument. E.g.:

  • Windows: genFeatureSet.exe mycoolimage.iset
  • Linux / Mac OS X: ./genFeatureSet mycoolimage.iset

This application selects and saves good features for tracking. The result is saved in an .fset file (e.g. mycoolimage.fset). The output window displays the features extracted at the different image sizes; all selected features are shown inside red squares. Press the space bar to step through the image sizes.

Fourth step: train system to embedded fiducial marker

If using the version of ARToolKit NFT which does not require fiducial markers, skip this step.

If using the standard ARToolKit NFT, in which one or more fiducial markers are required, this step recognises and trains the fiducial marker(s) you embedded in the image. The output of this step is a marker file (.mrk) and one or more pattern files (.pat-xx).

Run genMarkerSet.exe, providing the image set as a command line argument. E.g.:

  • Windows: genMarkerSet.exe mycoolimage.iset
  • Linux / Mac OS X: ./genMarkerSet mycoolimage.iset

You will see two numbers: the first is the number of candidate markers in the image, and the second is the number of markers which pass a goodness test (checking that the marker is square, with clean edges) and are therefore candidates for training. If the second number is 0, then no candidate markers were found, and you will need to go back to the very start of the process and create an image with an embedded marker.

If markers were detected (usually the case), their detected positions will be displayed in a window, and you will be prompted to accept or reject each one. Type y and press return to accept a marker, then enter a filename to save it under.

Fifth step: write a config.dat file for use with the simpleNFT example

The easiest way to test your trained NFT marker is using the simpleNFT example. This example expects a command-line argument specifying a configuration file. This file specifies the number of markers, their image sets, and a marker transformation.

The '#' symbol is used for comments. The first line should be the number of textures to track, and the second line the path to the .iset file. A transformation matrix from image coordinates to world coordinates should be given at the end of the file.

A config file for one image set can be made in any text editor by copying the text below:

1
mycoolimage.iset
 1.0000  0.0000  0.0000  0.0000
 0.0000  1.0000  0.0000  0.0000
 0.0000  0.0000  1.0000  0.0000

This will use the image set, feature maps, feature set and markers you have just generated.
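
If you prefer to generate the file programmatically (for example when preparing several surfaces), a small Python script such as the sketch below will write the same single-surface configuration. The filenames are just the examples used above.

# Write a minimal config.dat for one NFT surface, following the layout
# described above: the number of surfaces, the path to the .iset file, and
# an image-to-world transformation matrix (here the identity, with no
# translation). The filenames are only examples.

CONFIG_LINES = [
    "1",                 # number of textures (surfaces) to track
    "mycoolimage.iset",  # path to the image set generated in the first step
    # image-to-world transformation matrix
    " 1.0000  0.0000  0.0000  0.0000",
    " 0.0000  1.0000  0.0000  0.0000",
    " 0.0000  0.0000  1.0000  0.0000",
]

with open("config.dat", "w") as config_file:
    config_file.write("\n".join(CONFIG_LINES) + "\n")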

If you wish to move these files from the bin directory, be sure to edit the pathnames in config.dat and in the marker (.mrk) files. Look at the MagicLand3 sample for an example.

Testing the generated ARToolKit NFT surface

Firstly, make sure you have a high-quality print of your surface (including any fiducial markers). The marker should be affixed to a surface that keeps it flat. If mounting in a book, surfaces should be printed on heavy card and mounted with a board-book binding, or a ring binding. If used as a separate surface, affix to some thin card with a dry glue (e.g. a glue stick).

Glue printed pages to a flat surface using a dry glue

The easiest means of testing NFT markers you train is to run them using the simpleNFT example program. Open a console window and change to the ARToolKitNFT bin directory.

Run simpleNFT.exe, providing the config.dat file as a command line argument. E.g.:

  • Windows: simpleNFT.exe config.dat (to use a just-generated marker)
  • Linux / Mac OS X: ./simpleNFT config.dat (to use a just-generated marker)
  • Windows: simpleNFT.exe Data/MagicLand3/config.dat (to use the MagicLand3 marker)
  • Linux / Mac OS X: ./simpleNFT Data/MagicLand3/config.dat (to use the MagicLand3 marker)

Tracking in this application is initialised by the appearance of a marker. Once a marker is detected, tracking switches to feature-based tracking and the marker is no longer necessary; red 3D boxes are drawn on the image. If feature tracking fails, the application changes back to marker-based tracking and yellow 3D boxes are drawn.
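
This behaviour amounts to a simple two-mode loop: initialise from the fiducial marker, hand over to natural-feature tracking, and fall back when feature tracking is lost. The Python sketch below only illustrates that logic; the detection functions are stand-in stubs, not the real ARToolKit NFT API.

# Schematic of the two tracking modes described above. The detection
# functions are stand-in stubs, not part of the ARToolKit NFT API.

MARKER_BASED, FEATURE_BASED = "marker", "feature"

def detect_fiducial_marker(frame):
    # Stand-in: the real application runs ARToolKit square-marker detection.
    return frame.get("marker_pose")

def track_features(frame):
    # Stand-in: the real application runs NFT natural-feature tracking.
    return frame.get("feature_pose")

def process_frame(frame, mode):
    """Process one camera frame and return the mode to use for the next frame."""
    if mode == MARKER_BASED:
        pose = detect_fiducial_marker(frame)
        if pose is not None:
            print("yellow box at", pose)   # pose comes from the fiducial marker
            return FEATURE_BASED           # hand over to feature tracking
        return MARKER_BASED
    pose = track_features(frame)
    if pose is not None:
        print("red box at", pose)          # pose comes from natural features
        return FEATURE_BASED
    return MARKER_BASED                    # feature tracking lost: fall back

mode = MARKER_BASED
for frame in ({"marker_pose": (0, 0, 0)}, {"feature_pose": (1, 2, 3)}, {}):
    mode = process_frame(frame, mode)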

Moving on

Once you have generated a few marker sets, and seen the tracking response, you're ready to gain a deeper understanding of NFT tracking. You can read the reference documentation for more information.
