AI · 7 min read

Recurrent Neural Networks

Neural networks for sequential data.

Robert Anderson
December 18, 2025

AI with memory.

What are RNNs?

Neural networks that remember previous inputs.

Perfect for sequences: text, time series, speech!

Why Special?

Regular feedforward networks treat each input independently - they forget everything that came before.

RNNs carry a hidden state from step to step, so they have memory - like reading a story and remembering earlier sentences!

How RNNs Work

Each step:
1. Read the current input
2. Remember the previous state
3. Combine both
4. Output a prediction

Like predicting the next word: "The weather in Miami is..." → the RNN remembers the context → predicts "sunny".
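Here is a minimal NumPy sketch of that loop. The weight names, sizes, and random values are purely illustrative - the point is that each step mixes the current input with the previous hidden state:

```python
import numpy as np

# Toy dimensions, chosen only for illustration
input_size, hidden_size = 4, 8

# Randomly initialized weights for the sketch
W_x = np.random.randn(hidden_size, input_size) * 0.1   # input -> hidden
W_h = np.random.randn(hidden_size, hidden_size) * 0.1  # previous state -> hidden
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One RNN step: combine the current input with the previous state."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Run over a short random sequence, carrying the hidden state forward
sequence = [np.random.randn(input_size) for _ in range(5)]
h = np.zeros(hidden_size)  # memory starts empty
for x_t in sequence:
    h = rnn_step(x_t, h)   # h now summarizes everything seen so far
```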

Simple RNN in Python

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, features = 30, 1  # example: 30 time steps, 1 feature per step

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(timesteps, features)),
    LSTM(50),   # second LSTM layer returns only the final state
    Dense(1)    # single output, e.g. the next value in the sequence
])

model.compile(optimizer='adam', loss='mse')
```
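A quick smoke test on random data - the shapes and epoch count here are placeholders, just to confirm the model runs end to end:

```python
import numpy as np

X_dummy = np.random.rand(100, timesteps, features)  # 100 random sequences
y_dummy = np.random.rand(100, 1)                    # 100 random targets

model.fit(X_dummy, y_dummy, epochs=2, batch_size=16, verbose=0)
print(model.predict(X_dummy[:1]))  # one prediction for the first sequence
```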

Types of RNNs

**1. Simple RNN**: Basic memory (rarely used now)

**2. LSTM** (Long Short-Term Memory):
- Better memory over long sequences
- Avoids the vanishing gradient problem
- Most popular

**3. GRU** (Gated Recurrent Unit):
- Simpler than LSTM
- Faster to train
- Similar performance (see the sketch after this list)
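In Keras, switching from LSTM to GRU is essentially a one-line change. A sketch, reusing the same placeholder shape as before:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense

timesteps, features = 30, 1  # same placeholder shape as the LSTM example

# Same structure as the LSTM model above, with GRU cells instead
gru_model = Sequential([
    GRU(50, return_sequences=True, input_shape=(timesteps, features)),
    GRU(50),
    Dense(1)
])

gru_model.compile(optimizer='adam', loss='mse')
```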

Example - Stock Price Prediction

```python
import numpy as np

# prices: historical closing prices (assumed loaded earlier as a list or array)

# Prepare data: last 60 days → predict next day
X = []  # Past 60 days
y = []  # Next day price

for i in range(60, len(prices)):
    X.append(prices[i-60:i])
    y.append(prices[i])

X = np.array(X).reshape(-1, 60, 1)  # (samples, timesteps, features)
y = np.array(y)

# Build LSTM
model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(60, 1)),
    LSTM(50),
    Dense(1)
])

model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=50)
```
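Once trained, predicting the next day from the most recent 60 prices might look like this - a sketch, assuming `prices` is the same series used above:

```python
last_window = np.array(prices[-60:]).reshape(1, 60, 1)  # most recent 60 days
next_day = model.predict(last_window)
print(f"Predicted next price: {next_day[0, 0]:.2f}")
```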

Applications

- Stock price prediction
- Language translation
- Text generation
- Speech recognition
- Music generation

Challenges

- Slow to train
- Can't parallelize well (each step depends on the previous one)
- Vanishing gradient problem (mitigated, but not eliminated, by LSTM and GRU)
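The vanishing gradient problem is easy to see numerically: backpropagating through many steps multiplies many small per-step factors together, and the product collapses toward zero. A toy illustration with a made-up factor of 0.9:

```python
# Toy illustration: a per-step gradient factor of 0.9 repeated over long sequences
factor = 0.9
for steps in (10, 50, 200):
    print(steps, factor ** steps)
# 10  -> ~0.35
# 50  -> ~0.005
# 200 -> ~7e-10  (signals from early steps barely influence learning)
```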

Modern Alternative - Transformers

Newer architecture (used in ChatGPT):
- No recurrence
- Parallel training
- Better performance

Remember

- RNNs are for sequences
- LSTM is the most common variant
- Transformers are replacing RNNs

#AI #Intermediate #RNN