Segmentation, the sectioning of areas of interest in an image – in this case, cells or parts of cells – can be achieved through computer-vision techniques. Two approaches are mentioned in the article: (1) model-based and (2) machine learning.

Model-based approaches, such as thresholding, declumping, and watershed segmentation, are the most common; they are found in CellProfiler and ImageJ and perform well on fluorescence microscopy images of cultured cells. However, these approaches require manual parameter adjustment for each new experimental setup. Machine-learning-based segmentation (for example, in ilastik) is supposed to perform better on difficult segmentation tasks, such as highly variable cell types or tissues.
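To make the model-based pipeline concrete, here is a minimal sketch of the threshold-then-declump idea using scikit-image. This is an illustration, not the exact pipeline any of these tools run internally: the function name and parameter values (e.g., `min_distance=5`) are my own choices.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(image):
    """Threshold the image, then declump touching cells with a watershed."""
    # 1. Global Otsu threshold separates foreground from background.
    mask = image > threshold_otsu(image)
    # 2. Peaks of the distance transform seed one marker per cell.
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=5, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # 3. Watershed on the inverted distance map splits clumped objects.
    return watershed(-distance, markers, mask=mask)
```

Note the parameters baked into this sketch (the peak spacing, the global threshold choice) are exactly what must be re-tuned for each new experimental setup.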

Let’s see what open-source machine learning approaches offer. We’ll leave the model-based (classical) approach for the reader to explore independently.

ilastik's Pixel-based Classification and Segmentation

ilastik is a rich program useful for segmentation, feature extraction, and object classification! With pixel-based classification, one can train a classifier to identify cells of interest. A post on CellProfiler’s blog describing the use of ilastik to segment cells was helpful for understanding the workflow.

Feature selection used in ML training

You can choose to apply different convolution filters at different kernel sizes. I selected all available features, then trimmed the list to the features that best highlighted, traced, or distinguished cells from the background. These features feed into the pixel-based training and classification.

When training the pixel-based classifier, the leveling tool helped capture all portions of the cell body. In the image below, the cursor is on the leveling tool button. In my case, it was difficult to completely exclude small areas around the cell nucleus that were not positively labeled. Further refinement through size exclusion and thresholding therefore helped exclude small bright areas and debris.
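The size-exclusion step can be sketched in a few lines with scikit-image. The function name and the `min_area` cutoff here are hypothetical; the right cutoff depends on your magnification and cell size.

```python
import numpy as np
from skimage.morphology import remove_small_objects

def clean_mask(binary_mask, min_area=64):
    """Drop connected components smaller than min_area pixels.

    min_area is an assumed cutoff -- tune it so debris and small bright
    spots fall below it while whole cell bodies stay above it.
    """
    return remove_small_objects(np.asarray(binary_mask, dtype=bool),
                                min_size=min_area)
```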

After training on a few (10+) images, I processed a batch of (500+) images:

Note that in Object Classification you will need at least two classifications, or labels, to move on to Object Information Export. I manually labeled objects to train the classifier. However, exporting produced an error that prevented images from being saved. Therefore, I used the Pixel Classification module instead and exported segmentation masks (binary images) for each image.

Training Pixel-based Classifier

Leveling to capture more of the cell body

My images are spread across multiple folders. Unix wildcards can be used to process multiple images at once from ilastik’s command-line interface. In my case, images are organized by experiment date, cell type (primary antibody), and fluorescent label; the images I am interested in contain “CY3” in the file name. From the command line, I was able to process images spread across many folders and subfolders. Here is the code I used to apply the trained pixel-based simple segmentation procedure and create image masks for later use.
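A sketch of that batch call is below, wrapped in Python so the recursive file search is explicit. The install path, project name, and folder layout (`experiments/**`) are assumptions; the `--headless`, `--export_source`, and `--output_filename_format` options, and the `{dataset_dir}`/`{nickname}` naming variables, are ilastik’s own.

```python
import glob
import shutil
import subprocess

ILASTIK = "/opt/ilastik/run_ilastik.sh"   # hypothetical install path
PROJECT = "cy3_pixel_classifier.ilp"      # hypothetical trained project file

def build_command(images):
    """Assemble the headless ilastik call that writes one mask per image.

    {dataset_dir}/{nickname} are ilastik's output-naming variables, so each
    mask is saved next to its source image.
    """
    return [
        ILASTIK,
        "--headless",
        f"--project={PROJECT}",
        "--export_source=Simple Segmentation",
        "--output_format=tiff",
        "--output_filename_format={dataset_dir}/{nickname}_mask.tiff",
        *images,
    ]

if __name__ == "__main__":
    # ** recurses through the experiment-date / cell-type subfolders.
    cy3_images = sorted(glob.glob("experiments/**/*CY3*.tif", recursive=True))
    if cy3_images and shutil.which(ILASTIK):
        subprocess.run(build_command(cy3_images), check=True)
```

The same thing can be done directly in the shell by passing a wildcard such as `experiments/*/*CY3*.tif` to `run_ilastik.sh --headless`.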


The bracketed variables are used to name the output files that are generated.


Next up, let’s see how masks created in ilastik can be used to crop cells and save the crops to separate files.