LSTM (Long Short-Term Memory) for Discharge Prediction

LSTM networks are a type of recurrent neural network (RNN) designed to learn patterns in sequential data (time series). In hydrology, LSTMs are often used to capture:

  • Seasonality (e.g., wet/dry season patterns)
  • Memory effects (discharge depends on previous months)
  • Delayed responses (rainfall → discharge lag)
  • Non-linear interactions between meteorology and discharge

This tutorial uses the repository dataset:

Dataset: Runoff_Data.csv

The file contains monthly records from 1990-01 to 2019-12 with these columns:

| Column | Meaning |
| --- | --- |
| Date | Month (YYYY-MM) |
| Rainfall | Monthly rainfall |
| Tmin | Monthly minimum temperature |
| Tmax | Monthly maximum temperature |
| Discharge | Monthly discharge (target) |
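The values below are invented, but they illustrate the layout the loading code expects:

```python
import pandas as pd

# Hypothetical rows mirroring the Runoff_Data.csv layout (numbers are made up)
sample = pd.DataFrame(
    {
        "Date": ["1990-01", "1990-02", "1990-03"],
        "Rainfall": [120.5, 98.3, 75.0],
        "Tmin": [14.2, 15.1, 16.8],
        "Tmax": [27.9, 28.4, 30.2],
        "Discharge": [45.6, 38.9, 30.1],
    }
)
sample["Date"] = pd.to_datetime(sample["Date"], format="%Y-%m")
print(sample.dtypes)
```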

Intuition (with diagrams)

Overall flow

(Figure: LSTM overview diagram)

Inside an LSTM cell

(Figure: LSTM cell diagram)

What we will build

We train an LSTM to predict the next month's discharge using the previous lookback months of:

  • Rainfall, Tmin, Tmax, and past Discharge (as an input feature)

That means:

  • Input X[t] = rows [t-lookback, ..., t-1] (a sequence of months)
  • Target y[t] = discharge at time t (the “next” month relative to the input window)
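A tiny worked example of the windowing, with plain integers standing in for months:

```python
lookback = 3  # small value just to illustrate the windowing

# With months m0..m5, the (input window -> target) pairs are:
#   [m0, m1, m2] -> m3
#   [m1, m2, m3] -> m4
#   [m2, m3, m4] -> m5
months = list(range(6))
pairs = [(months[t - lookback:t], months[t]) for t in range(lookback, len(months))]
print(pairs)  # [([0, 1, 2], 3), ([1, 2, 3], 4), ([2, 3, 4], 5)]
```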

Step 0: Environment

Recommended Python packages:

pip install numpy pandas scikit-learn matplotlib tensorflow

Step 1: Imports and reproducibility

The imports below cover: data loading (pandas), scaling (scikit-learn), modeling (tensorflow/keras), and plotting (matplotlib).

from __future__ import annotations

from pathlib import Path

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.preprocessing import StandardScaler

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

Set seeds for more repeatable results:

np.random.seed(42)
tf.random.set_seed(42)

Step 2: Load Runoff_Data.csv

DATA_PATH = Path("docs/assets/data/Runoff_Data.csv")
if not DATA_PATH.exists():
    DATA_PATH = Path("Runoff_Data.csv")

df = pd.read_csv(DATA_PATH)
df["Date"] = pd.to_datetime(df["Date"], format="%Y-%m")
df = df.sort_values("Date").set_index("Date")

print(df.shape)
df.head()

What this does (conceptually):

  • Reads the CSV into a time-indexed table.
  • Ensures data is in chronological order.
  • Keeps Date as the index so splitting by time is straightforward.
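A few defensive checks are worth running right after loading. The helper below is a sketch (the sanity_check name is ours, not part of any library):

```python
import pandas as pd

def sanity_check(df: pd.DataFrame) -> None:
    """Basic pre-modelling checks: chronological order and missing values."""
    assert df.index.is_monotonic_increasing, "dates must be in chronological order"
    assert not df.isna().any().any(), "unexpected missing values"
    print("Date range:", df.index.min(), "to", df.index.max())
```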

Step 3: Choose features, target, and lookback

An LSTM expects 3D input shaped:

  • (samples, timesteps, features)

Here:

  • timesteps = LOOKBACK (how many previous months per sample)
  • features = 4 (Rainfall, Tmin, Tmax, Discharge)

FEATURE_COLS = ["Rainfall", "Tmin", "Tmax", "Discharge"]
TARGET_COL = "Discharge"
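The 3D layout can be illustrated with a small placeholder array (the sample count of 5 here is arbitrary):

```python
import numpy as np

# 5 samples, each a 12-month window of the 4 features above
X_demo = np.zeros((5, 12, 4), dtype=np.float32)
print(X_demo.shape)     # (5, 12, 4) = (samples, timesteps, features)
print(X_demo[0].shape)  # (12, 4): one sample = one lookback window
```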

Hyperparameters (the main things you tune)

LOOKBACK = 12  # (1)
LSTM_UNITS = 64  # (2)
DROPOUT = 0.20  # (3)
DENSE_UNITS = 32  # (4)
LR = 1e-3  # (5)

EPOCHS = 200  # (6)
BATCH_SIZE = 32  # (7)
PATIENCE = 20  # (8)

  1. Number of past months used to predict the next month (12 ≈ one full seasonal cycle).
  2. LSTM capacity (more units = more expressive, but higher overfitting risk).
  3. Regularization strength (fraction of activations dropped during training).
  4. Capacity of the post-LSTM dense layer that mixes learned temporal features.
  5. Learning rate for Adam optimizer (too high can diverge; too low can be slow).
  6. Maximum epochs; early stopping usually stops earlier.
  7. Mini-batch size per gradient update.
  8. Early stopping patience on validation loss.

Step 4: Convert the time series into supervised sequences

We create sliding windows so each training sample is a sequence of the previous LOOKBACK months.

def make_sequences(
    df: pd.DataFrame,
    feature_cols: list[str],
    target_col: str,
    lookback: int,
):
    values = df[feature_cols].to_numpy(dtype=np.float32)
    target = df[target_col].to_numpy(dtype=np.float32)
    dates = df.index.to_numpy()

    X_list: list[np.ndarray] = []
    y_list: list[float] = []
    y_dates: list[np.datetime64] = []

    for i in range(lookback, len(df)):
        X_list.append(values[i - lookback : i])
        y_list.append(float(target[i]))
        y_dates.append(dates[i])

    X = np.stack(X_list, axis=0)
    y = np.array(y_list, dtype=np.float32)
    y_dates = np.array(y_dates)
    return X, y, y_dates


X, y, y_dates = make_sequences(df, FEATURE_COLS, TARGET_COL, LOOKBACK)
print("X shape:", X.shape)
print("y shape:", y.shape)
print("First predicted month:", pd.to_datetime(y_dates[0]).date())

Key idea:

  • If your raw data has N months, you get N - LOOKBACK supervised samples.
  • Each X[i] is a (LOOKBACK, n_features) sequence and y[i] is a single discharge value.
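With the dataset above (360 monthly rows, 1990-01 to 2019-12) and LOOKBACK = 12, the arithmetic works out as:

```python
n_months = 360  # 30 years of monthly records (1990-01 to 2019-12)
lookback = 12

n_samples = n_months - lookback
print(n_samples)  # 348 supervised samples

# Each sample: 12 timesteps x 4 features; each target: one discharge value
x_shape = (n_samples, lookback, 4)
print(x_shape)  # (348, 12, 4)
```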

Step 5: Train/validation/test split (time-aware)

For time series, do not shuffle randomly. Split in chronological order so the test set represents “future” months.

n = len(X)
train_end = int(0.70 * n)
val_end = int(0.85 * n)

X_train, y_train = X[:train_end], y[:train_end]
X_val, y_val = X[train_end:val_end], y[train_end:val_end]
X_test, y_test = X[val_end:], y[val_end:]

dates_train = y_dates[:train_end]
dates_val = y_dates[train_end:val_end]
dates_test = y_dates[val_end:]
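Applied to the 348 samples from Step 4, these fractions give the following split sizes:

```python
n = 348  # supervised samples after windowing (see Step 4)
train_end = int(0.70 * n)
val_end = int(0.85 * n)

# Train / validation / test counts
print(train_end, val_end - train_end, n - val_end)  # 243 52 53
```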

Step 6: Scale features and the target (no data leakage)

LSTMs train more reliably when inputs are standardized. The important rule is:

  • Fit scalers only on training data, then transform validation/test with the same scalers.

feature_scaler = StandardScaler()
target_scaler = StandardScaler()

X_train_2d = X_train.reshape(-1, X_train.shape[-1])
X_val_2d = X_val.reshape(-1, X_val.shape[-1])
X_test_2d = X_test.reshape(-1, X_test.shape[-1])

X_train_scaled = feature_scaler.fit_transform(X_train_2d).reshape(X_train.shape)
X_val_scaled = feature_scaler.transform(X_val_2d).reshape(X_val.shape)
X_test_scaled = feature_scaler.transform(X_test_2d).reshape(X_test.shape)

y_train_scaled = target_scaler.fit_transform(y_train.reshape(-1, 1)).flatten()
y_val_scaled = target_scaler.transform(y_val.reshape(-1, 1)).flatten()
y_test_scaled = target_scaler.transform(y_test.reshape(-1, 1)).flatten()
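The no-leakage rule can be demonstrated on toy arrays (invented numbers, not the tutorial's variables): after fitting on the training rows only, the training features are centred, while "future" rows with a shifted mean generally are not:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
train = rng.normal(10.0, 3.0, size=(100, 4))   # toy "past" features
future = rng.normal(12.0, 3.0, size=(30, 4))   # toy "future" features (shifted mean)

scaler = StandardScaler().fit(train)           # fit on training rows only
train_s = scaler.transform(train)
future_s = scaler.transform(future)

# Training features are centred by construction; the shifted future rows are not
print(np.allclose(train_s.mean(axis=0), 0.0))  # True
print(future_s.mean(axis=0))
```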

Step 7: Build and compile the LSTM model

Architecture:

  • LSTM reads a (LOOKBACK, N_FEATURES) sequence and outputs a learned vector.
  • Dropout regularizes.
  • Dense layers map that vector to a single discharge prediction.

N_FEATURES = X_train_scaled.shape[-1]

model = Sequential(
    [
        LSTM(LSTM_UNITS, input_shape=(LOOKBACK, N_FEATURES)),
        Dropout(DROPOUT),
        Dense(DENSE_UNITS, activation="relu"),
        Dense(1),
    ]
)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LR),
    loss="mse",
    metrics=["mae"],
)

model.summary()
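As a sanity check on model.summary(), the parameter counts can be derived by hand. The numbers below assume the hyperparameters above (64 LSTM units, 4 features, a Dense(32) layer):

```python
units, n_features = 64, 4

# LSTM: 4 gates, each with a kernel (n_features x units),
# a recurrent kernel (units x units), and a bias (units)
lstm_params = 4 * (units * (n_features + units) + units)
dense1_params = units * 32 + 32  # Dense(32) after the LSTM
dense2_params = 32 * 1 + 1       # final Dense(1)

total = lstm_params + dense1_params + dense2_params
print(lstm_params, dense1_params, dense2_params, total)  # 17664 2080 33 19777
```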

Step 8: Train (with early stopping)

early_stop = EarlyStopping(
    monitor="val_loss",
    patience=PATIENCE,
    restore_best_weights=True,
)

history = model.fit(
    X_train_scaled,
    y_train_scaled,
    validation_data=(X_val_scaled, y_val_scaled),
    epochs=EPOCHS,
    batch_size=BATCH_SIZE,
    callbacks=[early_stop],
    verbose=1,
)

Plot learning curves:

plt.figure(figsize=(8, 4))
plt.plot(history.history["loss"], label="train")
plt.plot(history.history["val_loss"], label="val")
plt.title("Training vs validation loss")
plt.xlabel("Epoch")
plt.ylabel("MSE")
plt.grid(True, alpha=0.3)
plt.legend()
plt.show()

Step 9: Predict and inverse-transform to original units

y_pred_test_scaled = model.predict(X_test_scaled).reshape(-1)
y_pred_test = target_scaler.inverse_transform(y_pred_test_scaled.reshape(-1, 1)).reshape(-1)

y_true_test = y_test

Step 10: Evaluate (R², NSE, RMSE, PBIAS)

def evaluate_regression(obs: np.ndarray, sim: np.ndarray):
    obs = np.asarray(obs, dtype=np.float64).reshape(-1)
    sim = np.asarray(sim, dtype=np.float64).reshape(-1)

    rmse = float(np.sqrt(np.mean((obs - sim) ** 2)))
    mae = float(np.mean(np.abs(obs - sim)))

    r = float(np.corrcoef(obs, sim)[0, 1])
    r2 = r**2

    nse = 1.0 - float(np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2))
    pbias = 100.0 * float(np.sum(obs - sim) / np.sum(obs))

    return {"RMSE": rmse, "MAE": mae, "R2": r2, "NSE": nse, "PBIAS": pbias}


metrics = evaluate_regression(y_true_test, y_pred_test)
print("Test metrics:")
for k, v in metrics.items():
    print(f"  {k}: {v:.4f}")
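A quick sanity check on the metric definitions: a perfect simulation should give NSE = 1, and a simulation that overestimates everywhere should give a negative PBIAS under the sign convention used above (obs − sim in the numerator):

```python
import numpy as np

obs = np.array([10.0, 20.0, 30.0])
sim_perfect = obs.copy()
sim_high = obs + 6.0  # overestimates by 6 everywhere

nse = 1.0 - np.sum((obs - sim_perfect) ** 2) / np.sum((obs - obs.mean()) ** 2)
pbias = 100.0 * np.sum(obs - sim_high) / np.sum(obs)

print(nse)    # 1.0
print(pbias)  # -30.0 (negative = overestimation under this sign convention)
```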

Plot predicted vs observed over time:

test_index = pd.to_datetime(dates_test)

plt.figure(figsize=(10, 4))
plt.plot(test_index, y_true_test, label="Observed", color="black", linewidth=2, alpha=0.7)
plt.plot(test_index, y_pred_test, label="Predicted", color="tab:blue", linewidth=2, alpha=0.8)
plt.title("LSTM: discharge prediction (test period)")
plt.xlabel("Date")
plt.ylabel("Discharge")
plt.grid(True, alpha=0.3)
plt.legend()
plt.tight_layout()
plt.show()

Notes and common improvements

  • Lookback: for monthly data, LOOKBACK=12 is a strong baseline; try 24 if you have enough data.
  • Multistep forecasting: predict 3–6 months ahead by shifting the label further into the future in make_sequences.
  • More features: add engineered lags or rolling aggregates (e.g., rainfall lag-1/2/3, rolling mean).
  • Regularization: increase dropout, reduce LSTM_UNITS, or add L2 if you see overfitting.
  • Splits for reporting: consider date-based splits (e.g., train 1990–2013, val 2014–2016, test 2017–2019).
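The multistep idea in the first bullet can be sketched as a small variant of make_sequences with a hypothetical horizon parameter (horizon=1 reproduces the next-month setup above):

```python
import numpy as np
import pandas as pd

def make_sequences_multistep(df, feature_cols, target_col, lookback, horizon=1):
    """Like make_sequences, but the label is `horizon` months after the window
    (horizon=1 gives the original next-month behaviour)."""
    values = df[feature_cols].to_numpy(dtype=np.float32)
    target = df[target_col].to_numpy(dtype=np.float32)
    X, y = [], []
    for i in range(lookback, len(df) - horizon + 1):
        X.append(values[i - lookback : i])          # same input window as before
        y.append(target[i + horizon - 1])           # label shifted ahead by horizon
    return np.stack(X), np.array(y, dtype=np.float32)
```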