Image Classification#

Satellite images often need to be classified (assigned to a fixed set of types) or used to detect various features of interest. Here we will look at the classification case, using labeled satellite images from various categories in the UCMerced LandUse dataset. scikit-learn is useful for general numeric data, but it doesn’t have significant support for working with images. Luckily, there are various deep-learning and convolutional-network libraries that do support images well, including Keras (backed by TensorFlow), which we will use here.

import intake

import numpy as np
import holoviews as hv
import pandas as pd
import random

from holoviews import opts
hv.extension('bokeh')

Get the classes and files#

All of the labeled image classification data is in a public bucket on S3 with a corresponding Intake catalog. This catalog provides several ways to access the data: you can access one image by landuse and id, all the images for a given landuse, or all the images at once.

cat = intake.open_catalog('https://s3.amazonaws.com/earth-data/UCMerced_LandUse/catalog.yml')
list(cat)
['UCMerced_LandUse_all',
 'UCMerced_LandUse_by_landuse',
 'UCMerced_LandUse_by_image']
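
For example, the parameterized entries can load a single image or a whole category directly. This is just a sketch; the parameter names landuse and id used here are assumptions based on the catalog listing above:

# Hypothetical usage of the parameterized catalog entries; the parameter
# names (landuse, id) are assumed from the catalog description above
beach_3 = cat.UCMerced_LandUse_by_image(landuse='beach', id=3).to_dask()
forest_all = cat.UCMerced_LandUse_by_landuse(landuse='forest').to_dask()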

The first time you run the cell below, it will download all of the images, which takes about 3 minutes on my machine. After that, the images are cached and loading takes only about 100 ms.

%time da = cat.UCMerced_LandUse_all().to_dask()
da
CPU times: user 610 ms, sys: 132 ms, total: 742 ms
Wall time: 868 ms
<xarray.DataArray 'imread-e27bb924cce898580cc2896641d3c189' (id: 100, landuse: 21, y: 256, x: 256, channel: 3)>
dask.array<transpose, shape=(100, 21, 256, 256, 3), dtype=uint8, chunksize=(1, 21, 256, 256, 3), chunktype=numpy.ndarray>
Coordinates:
  * y        (y) int64 0 1 2 3 4 5 6 7 8 ... 247 248 249 250 251 252 253 254 255
  * x        (x) int64 0 1 2 3 4 5 6 7 8 ... 247 248 249 250 251 252 253 254 255
  * channel  (channel) int64 0 1 2
  * id       (id) int64 0 1 2 3 4 5 6 7 8 9 10 ... 90 91 92 93 94 95 96 97 98 99
  * landuse  (landuse) object 'agricultural' 'airplane' ... 'tenniscourt'
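
Individual images can be pulled out of this array using xarray’s standard label-based indexing; for example, selecting one image by its coordinates:

# Select image 0 of the 'beach' category; the result is a lazily-loaded
# (y: 256, x: 256, channel: 3) DataArray backed by dask
da.sel(landuse='beach', id=0)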

We can see what’s going on more easily if we convert the data to a dataset with each data variable representing a different landuse.

ds = da.to_dataset(dim='landuse')
ds
<xarray.Dataset>
Dimensions:            (channel: 3, id: 100, x: 256, y: 256)
Coordinates:
  * y                  (y) int64 0 1 2 3 4 5 6 7 ... 249 250 251 252 253 254 255
  * x                  (x) int64 0 1 2 3 4 5 6 7 ... 249 250 251 252 253 254 255
  * channel            (channel) int64 0 1 2
  * id                 (id) int64 0 1 2 3 4 5 6 7 8 ... 92 93 94 95 96 97 98 99
Data variables:
    agricultural       (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    airplane           (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    baseballdiamond    (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    beach              (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    buildings          (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    chaparral          (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    denseresidential   (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    forest             (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    freeway            (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    golfcourse         (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    harbor             (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    intersection       (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    mediumresidential  (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    mobilehomepark     (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    overpass           (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    parkinglot         (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    river              (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    runway             (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    sparseresidential  (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    storagetanks       (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>
    tenniscourt        (id, y, x, channel) uint8 dask.array<chunksize=(1, 256, 256, 3), meta=np.ndarray>

Split files into train and test sets#

In order to accurately test the performance of the classifier we are building, we will split the data into training and test sets using an 80/20 split. We randomly sample 80 of the 100 image ids for the training set and hold out the remaining 20 for testing; since ids are shared across categories, the same split applies to every landuse:

train_set = np.random.choice(ds.id, 80, replace=False)
test_set = np.setdiff1d(ds.id, train_set)
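
A quick sanity check confirms that the two sets are disjoint and have the expected sizes:

# The ids must not overlap, and together they cover all 100 images
assert len(np.intersect1d(train_set, test_set)) == 0
len(train_set), len(test_set)  # (80, 20)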

Define function to sample from train or test set#

landuses = da.landuse.data
landuse_list = list(landuses)

Next we define a function that randomly samples an image from either the training or the test set, along with a helper to plot it:

def get_sample(landuse=None, set='training'):
    landuse = landuse or np.random.choice(landuses)
    i = random.choice(train_set if set == 'training' else test_set)
    return ds[landuse].sel(id=i)

def plot(data):
    options = opts.RGB(xaxis=None, yaxis=None)
    title = '{}: {}'.format(data.name, data.id.item())
    plot = hv.RGB(data.data.compute())
    return plot.options(options).relabel(title)

We can inspect one of these samples to see that the data is loaded as an xarray.DataArray.

data = get_sample()
data
<xarray.DataArray 'agricultural' (y: 256, x: 256, channel: 3)>
dask.array<getitem, shape=(256, 256, 3), dtype=uint8, chunksize=(256, 256, 3), chunktype=numpy.ndarray>
Coordinates:
  * y        (y) int64 0 1 2 3 4 5 6 7 8 ... 247 248 249 250 251 252 253 254 255
  * x        (x) int64 0 1 2 3 4 5 6 7 8 ... 247 248 249 250 251 252 253 254 255
  * channel  (channel) int64 0 1 2
    id       int64 13

We can render this array as a HoloViews RGB image to visualize it:

plot(data)
hv.Layout([plot(get_sample(landuse)) for landuse in np.random.choice(landuses, 4)]).cols(2)

Define the model#

Now it’s time to define a model. The code snippet below defines a convolutional neural network containing a stack of three convolution layers, each with a ReLU activation and followed by a max-pooling layer. This is very similar to the architectures that Yann LeCun advocated in the 1990s for image classification (with the exception of ReLU).

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense

size = (150, 150)

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(*size, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(21))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Using TensorFlow backend.
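
If you want to inspect the resulting architecture, Keras can print a layer-by-layer overview:

# Print a summary of the layers, their output shapes and parameter counts
model.summary()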

Declare the data#

The dataset of images we have is relatively small, so to avoid overfitting we will want to apply some augmentation. The code below defines a generator that randomly samples from either the training or test set and yields a randomly transposed, flipped, and cropped 150x150 window onto the 256x256 images, along with a one-hot label array:

def get_array(array, size=(150, 150)):
    # Randomly transpose and flip; together these produce all
    # eight orientations of the image
    if random.getrandbits(1):               # equivalent to random.choice([True, False]) but much faster
        array = array.transpose(1, 0, 2)    # swap the x and y axes
    if random.getrandbits(1):
        array = array[::-1]                 # flip vertically
    if random.getrandbits(1):
        array = array[:, ::-1]              # flip horizontally
    # Randomly crop a window of the requested size
    sh, sw = size
    h, w = array.shape[:2]
    b = np.random.randint(h-sh)
    l = np.random.randint(w-sw)
    array = array[b:b+sh, l:l+sw]
    # Rescale uint8 values to floats in [0, 1]
    return array/255.

# set up a mapping from each landuse to a row of the identity matrix, to use for one-hot encoding
one_hot_mapping = dict(zip(landuses, np.eye(21)))

def gen_samples(set='training', labels=None):
    "Generates random arrays along with landuse labels"
    while True:
        choice = get_sample(set=set)
        if labels is not None:
            labels.append(choice.name)
        one_hot = one_hot_mapping[choice.name]
        data = choice.data.compute()
        yield get_array(data)[np.newaxis, :], one_hot[np.newaxis, :]        
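
As a quick sanity check (a small sketch reusing the names defined above), we can pull a single augmented sample from the generator and confirm the shapes it yields:

# Draw one (image, label) pair from the generator: the image is a
# 1x150x150x3 float array and the label a 1x21 one-hot row
X, y = next(gen_samples('training'))
X.shape, y.shape  # ((1, 150, 150, 3), (1, 21))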

Run the model#

Before we start running the model, let’s set up a Keras Callback to build a dashboard to monitor the accuracy and loss during training:

import time
from keras.callbacks import Callback

class MonitorCallback(Callback):
    """
    Builds a streaming dashboard to monitor the accuracy
    and loss during training using HoloViews streams.
    """
    
    _format = '%s - Epoch: %d - Elapsed time: %.2fs'
    
    def __init__(self, metrics=('acc', 'loss')):
        super().__init__()
        sample = {'Epoch': np.array([])}
        for metric in metrics:
            sample[metric] = np.array([])
        self.buffer = hv.streams.Buffer(sample)
        dmaps = []
        for metric in metrics:
            def cb(data, metric=metric):
                return hv.Curve(
                    data, 'Epoch', metric, label=self._format
                    % (metric, self.epoch, self.elapsed_time))
            dmap = hv.DynamicMap(cb, streams=[self.buffer])
            dmaps.append(dmap)
        self.layout = hv.Layout(dmaps)
        self.metrics = metrics
        self.start_time = None
        self.epoch = 0

    def on_train_begin(self, logs={}):
        self.start_time = time.time()
        
    @property
    def elapsed_time(self):
        if self.start_time is None:
            return 0
        else:
            return time.time() - self.start_time

    def on_epoch_end(self, epoch, logs=None):
        self.epoch += 1
        data = {'Epoch': [self.epoch]}
        for metric in self.metrics:
            data[metric] = [logs.get(metric)]
        self.buffer.send(data)

Now we can create an instance of the callback and display the dashboard, which will at first appear blank.

monitor = MonitorCallback()
monitor.layout.options('Curve', width=450)

Next we will fit the model using our training data generator; as the model runs, it will update the dashboard above:

%%time
history = model.fit_generator(gen_samples('training'), steps_per_epoch=50, epochs=500, verbose=False, callbacks=[monitor])
CPU times: user 2h 28min 56s, sys: 3h 27min 33s, total: 5h 56min 30s
Wall time: 53min 47s
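
The History object returned by fit_generator also records the metrics for each epoch, so the final values can be inspected directly:

# Per-epoch metrics are stored in history.history; look at the last epoch
history.history['loss'][-1], history.history['acc'][-1]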

Evaluate the model#

First we will have a look at the monitoring output, smoothing it slightly so we can make out the overall trend:

from holoviews.operation.timeseries import rolling
monitor.layout.options('Curve', width=400).map(rolling, hv.Curve)

Now let us test the predictions on the test set, first visually:

def set_title_color(color, *args):
    """Helper function to set title color"""
    args[0].handles['plot'].title.text_color = color

def get_prediction(cls):
    sample = get_sample(cls, 'test')
    array = get_array(sample.data.compute())[np.newaxis, ...]
    p = model.predict(array).argmax()
    p = landuses[p]
    return (plot(sample)
            .relabel('Predicted: %s - Actual: %s' % (p, cls))
            .options(hooks=[lambda *args: set_title_color('red' if p!=cls else 'blue', *args)]))

options = dict(fontsize={'title': '8pt'}, width=250, height=250)
hv.Layout([get_prediction(landuse).options(**options) for landuse in landuses]).cols(3)

And now numerically by running 1,000 predictions on the test set:

ntesting = 1000
labels = []
test_gen = gen_samples('test', labels)
prediction = model.predict_generator(test_gen, steps=ntesting)
y_pred = landuses[prediction.argmax(axis=1)]
y_true = np.array(labels[:ntesting])

accuracy = (y_pred==y_true).sum()/ntesting

print(f'Accuracy on test set {accuracy}')
Accuracy on test set 0.142

Next we can see how well the classifier performs on the different categories, making 50 predictions on each category and recording both the accuracy and the individual predictions:

def predict(cls, iterations=50):
    accurate, predictions = [], []
    for i in range(iterations):
        sample = get_sample(cls, 'test')
        array = get_array(sample.data.compute())[np.newaxis, ...]
        p = model.predict(array).argmax()
        p = landuses[p]
        predictions.append(p)
        accurate.append(p == cls)
    return np.sum(accurate)/float(iterations), predictions

accuracies = [(c, *predict(c)) for c in landuses]

We can now break down the accuracy by landuse category:

df = pd.DataFrame(accuracies, columns=['landuse', 'accuracy', 'predictions'])

hv.Bars(df, 'landuse', 'accuracy', label='Accuracy by Landuse Category').options(
    width=700, xrotation=45, color='landuse', 
    cmap='Category20', show_legend=False)

Another interesting way of viewing this data is to look at which categories the classifier confused with each other. We will count how many times the classifier classified an image of one category as another category and visualize the result as a Chord graph, where each edge is colored by the actual category. By clicking on a node we can reveal which other categories had images incorrectly identified as belonging to that category:

pdf = pd.DataFrame([(p, l) for (_, l, _, ps) in df.itertuples() for p in ps], columns=['Prediction', 'Actual'])
graph = pdf.groupby(['Prediction', 'Actual']).size().to_frame().reset_index()
confusion = graph.rename(columns={0: 'Count'})
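
As a cross-check (a small sketch over the confusion frame built above), the diagonal of this table, where the prediction matches the actual category, recovers the overall accuracy across all 21 x 50 predictions:

# Correct predictions lie where Prediction == Actual
correct = confusion[confusion.Prediction == confusion.Actual].Count.sum()
correct / confusion.Count.sum()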

hv.Chord(confusion).relabel('Confusion Graph').options(
    node_color='index', cmap='Category20', edge_color='Actual', labels='index',
    width=600, height=600)

Clicking on buildings, for instance, reveals a lot of confusion with overpass, mediumresidential, and intersection, all of which share visual features. Conversely, a number of buildings images were misidentified as parkinglot, which is also understandable. As we saw in the bar chart above, forest, on the other hand, has many edges leading back to itself, reflecting the high accuracy observed for that category of images.

Finally, dividing the counts by the 50 predictions made per category turns them into fractions, which we can display as a confusion matrix:

confusion.Count /= 50  # normalize counts to fractions of the 50 predictions
hv.HeatMap(confusion, label='Confusion Matrix').sort().options(
    xrotation=45, width=500, height=500, cmap='blues', tools=['hover'], invert_yaxis=True, zlim=(0,1))