We present a new single-chip texture classifier based on the Cellular Neural Network (CNN) architecture. By exploiting the dynamics of a locally interconnected 2-D CNN cell array, we have developed a theoretically new method for texture classification and segmentation. The technique differs from other convolution-based feature-extraction methods in that it uses feedback convolution, and a genetic learning algorithm determines the optimal kernel matrices of the network. The CNN operators we have found for texture recognition may combine different early-vision effects. We show how the kernel matrices can be derived from the state equations of the network for convolution/deconvolution and nonlinear effects.
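The feedback convolution referred to above can be illustrated with the standard Chua-Yang CNN state equation. The following is a minimal sketch of one Euler integration step of that equation, assuming the usual piecewise-linear output nonlinearity; the template values at the bottom are illustrative placeholders, not the trained kernels found by the genetic learning algorithm.

```python
def f(x):
    # Piecewise-linear CNN output nonlinearity: y = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (abs(x + 1.0) - abs(x - 1.0))

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of dx/dt = -x + A*f(x) + B*u + z on a 2-D grid.

    A is the 3*3 feedback template (convolved with the cell outputs,
    hence 'feedback convolution'); B is the control template (convolved
    with the fixed input u); z is the bias. Borders are replicated.
    """
    rows, cols = len(x), len(x[0])
    y = [[f(v) for v in row] for row in x]
    new_x = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = -x[i][j] + z
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), rows - 1)
                    jj = min(max(j + dj, 0), cols - 1)
                    acc += A[di + 1][dj + 1] * y[ii][jj]
                    acc += B[di + 1][dj + 1] * u[ii][jj]
            new_x[i][j] = x[i][j] + dt * acc
    return new_x

# Illustrative templates (placeholders, not the paper's trained kernels):
A = [[0.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.0]]  # self-feedback only
B = [[0.1] * 3 for _ in range(3)]                          # local input averaging
u = [[-1.0 if (i + j) % 2 == 0 else 1.0 for j in range(4)] for i in range(4)]
x = [[0.0] * 4 for _ in range(4)]
for _ in range(50):
    x = cnn_step(x, u, A, B, z=0.0)
```

With the self-feedback coefficient above 1, each cell settles into a saturated binary state driven by the sign of its local input, which is the bistable behavior a trained feedback template exploits for classification.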

The whole process comprises histogram equalization of the textured images, filtering with the trained kernel matrices, and decision-making based on the average gray level or texture energy of the filtered images. We present experimental results obtained by digital CNN simulation, with sensitivity analysis for noise, rotation, and scale. We also report a tested application on a programmable 22*20 CNN chip with optical inputs and an execution time of a few microseconds.
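The three stages of this pipeline can be sketched as follows. The sketch assumes plain (feedforward) 3*3 convolution, mean absolute response as the texture-energy measure, and a nearest-reference-energy decision rule; the kernel and the reference energies are hypothetical placeholders, not the paper's trained templates or measured values.

```python
def equalize(img, levels=256):
    """Histogram-equalize a gray-scale image given as a list of rows."""
    flat = [p for row in img for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total, running = [], len(flat), 0
    for h in hist:
        running += h
        cdf.append(running / total)
    return [[int(cdf[p] * (levels - 1)) for p in row] for row in img]

def filter3x3(img, k):
    """Convolve the image with a 3*3 kernel (replicated borders)."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            s = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii = min(max(i + di, 0), rows - 1)
                    jj = min(max(j + dj, 0), cols - 1)
                    s += k[di + 1][dj + 1] * img[ii][jj]
            out[i][j] = s
    return out

def texture_energy(img):
    """Texture energy as the mean absolute filter response."""
    return sum(abs(p) for row in img for p in row) / (len(img) * len(img[0]))

def classify(img, kernel, reference_energies):
    """Assign the class whose stored reference energy is nearest."""
    e = texture_energy(filter3x3(equalize(img), kernel))
    return min(reference_energies, key=lambda c: abs(reference_energies[c] - e))
```

For example, with a Laplacian-like high-pass kernel, a flat patch yields near-zero texture energy while a fine checkerboard yields a large one, so the two are separated by their distance to stored reference energies.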

We have found that this CNN chip with a simple 3*3 CNN kernel can reliably classify 4 textures. We believe that, by using more templates in the decision-making, more textures can be separated and adequate texture segmentation (< 1\% error) can be achieved.

Figure: input image composed of 4 textures, and its segmentation by the CNN using 4 kernels of size 3*3.