42 changes: 42 additions & 0 deletions build/190.json
@@ -0,0 +1,42 @@
{
  "id": "190",
  "title": "Implement SqueezeNet Fire Module",
  "difficulty": "medium",
  "category": "Deep Learning",
  "video": "",
  "likes": "0",
  "dislikes": "0",
  "contributor": [
    {
      "profile_link": "https://github.com/syed-nazmus-sakib",
      "name": "Syed Nazmus Sakib"
    }
  ],
  "description": "Implement the Fire Module from SqueezeNet, a highly parameter-efficient convolutional neural network architecture. The Fire Module consists of a \"squeeze\" layer (1x1 convolutions) that feeds into an \"expand\" layer (a mix of 1x1 and 3x3 convolutions).\n\nYour task is to implement the forward pass of the Fire Module using NumPy. Given an input tensor and weights/biases for the squeeze and expand layers, compute the output.\n\n**Functions required:**\n\n1. `squeeze`: Applies 1x1 convolution to reduce channels.\n2. `expand`: Applies 1x1 and 3x3 convolutions in parallel and concatenates results.\n3. `fire_module`: Combines squeeze and expand (exposed as `fire_module_forward` in the starter code).\n\nAssume stride=1 and padding='same' for 3x3 convolutions (so spatial dimensions are preserved).",
  "learn_section": "## SqueezeNet Fire Module Explained\n\nSqueezeNet achieves AlexNet-level accuracy with 50x fewer parameters through its \"Fire Module\".\n\n### Architecture\n\nThe Fire Module consists of two layers:\n\n1. **Squeeze Layer**: A convolution layer with only 1x1 filters.\n   - Purpose: Reduce the number of input channels before feeding them to larger 3x3 filters. This minimizes computation and parameters.\n   - Notation: $s_{1x1}$ is the number of 1x1 filters in the squeeze layer.\n\n2. **Expand Layer**: A mix of 1x1 and 3x3 convolution filters.\n   - Purpose: Capture both local details (3x3) and pointwise features (1x1).\n   - Notation:\n     - $e_{1x1}$: Number of 1x1 filters in the expand layer.\n     - $e_{3x3}$: Number of 3x3 filters in the expand layer.\n   - The outputs of these two parallel convolutions are concatenated along the channel dimension.\n\n### Operation\n\nThe flow is:\n`Input -> Squeeze (1x1 conv + ReLU) -> Expand ( (1x1 conv + ReLU) || (3x3 conv + ReLU) ) -> Concatenate -> Output`\n\n### Why It Works\n\nA standard 3x3 convolution on $C_{in}$ channels with $C_{out}$ filters has parameters:\n$P_{standard} = 3 \\times 3 \\times C_{in} \\times C_{out}$\n\nIn the Fire Module:\n\n1. The squeeze layer reduces $C_{in}$ to $s_{1x1}$ (small). Parameters: $1 \\times 1 \\times C_{in} \\times s_{1x1}$.\n2. The expand 3x3 layer operates on only $s_{1x1}$ channels. Parameters: $3 \\times 3 \\times s_{1x1} \\times e_{3x3}$.\n3. The expand 1x1 layer adds more features cheaply. Parameters: $1 \\times 1 \\times s_{1x1} \\times e_{1x1}$.\n\nTotal parameters are significantly lower if $s_{1x1} \\ll C_{in}$.\n\n### Implementation Details\n\n- **Padding**: Use 'same' padding for 3x3 convolutions so spatial dimensions match 1x1 outputs.\n- **Concatenation**: Join along the channel axis (axis=-1 or 1 depending on format; here we assume (H, W, C)).",
  "starter_code": "import numpy as np\n\ndef fire_module_forward(input_tensor,\n                        squeeze_weights, squeeze_bias,\n                        expand1x1_weights, expand1x1_bias,\n                        expand3x3_weights, expand3x3_bias):\n    \"\"\"\n    Implements the forward pass of a SqueezeNet Fire Module.\n\n    Args:\n        input_tensor: (H, W, C_in)\n        squeeze_weights: (1, 1, C_in, s1x1)\n        squeeze_bias: (s1x1,)\n        expand1x1_weights: (1, 1, s1x1, e1x1)\n        expand1x1_bias: (e1x1,)\n        expand3x3_weights: (3, 3, s1x1, e3x3)\n        expand3x3_bias: (e3x3,)\n\n    Returns:\n        output_tensor: (H, W, e1x1 + e3x3)\n\n    Note:\n        - All convolutions use stride=1.\n        - 3x3 convolution uses padding='same' (pad=1).\n        - Apply ReLU activation after each convolution (max(0, x)).\n    \"\"\"\n    # Your code here\n    pass",
  "solution": "import numpy as np\n\ndef relu(x):\n    \"\"\"ReLU activation.\"\"\"\n    return np.maximum(0, x)\n\ndef conv2d(x, w, b, padding=0):\n    \"\"\"\n    Apply 2D convolution.\n    x: (H, W, C_in)\n    w: (kh, kw, C_in, C_out)\n    b: (C_out,)\n    padding: int (symmetrical padding on H and W)\n    \"\"\"\n    H, W, C_in = x.shape\n    kh, kw, _, C_out = w.shape\n\n    # Apply padding if needed\n    if padding > 0:\n        x_padded = np.pad(x, ((padding, padding), (padding, padding), (0, 0)), mode='constant')\n    else:\n        x_padded = x\n\n    # Output dimensions (stride=1 with 'same' padding assumed, so H_out == H)\n    H_out = H\n    W_out = W\n\n    # Optimization for 1x1 convolution\n    if kh == 1 and kw == 1:\n        # Flatten spatial dimensions: (H*W, C_in)\n        x_flat = x.reshape(-1, C_in)\n        # Weights: (C_in, C_out)\n        w_flat = w.reshape(C_in, C_out)\n        # Matrix multiplication + bias\n        out_flat = np.dot(x_flat, w_flat) + b\n        return out_flat.reshape(H, W, C_out)\n\n    output = np.zeros((H_out, W_out, C_out))\n\n    # General convolution (loops)\n    for h in range(H_out):\n        for w_idx in range(W_out):\n            # Extract patch\n            patch = x_padded[h:h+kh, w_idx:w_idx+kw, :]\n            # Convolution for each output channel\n            for c in range(C_out):\n                kernel = w[:, :, :, c]\n                output[h, w_idx, c] = np.sum(patch * kernel) + b[c]\n\n    return output\n\ndef fire_module_forward(input_tensor,\n                        squeeze_weights, squeeze_bias,\n                        expand1x1_weights, expand1x1_bias,\n                        expand3x3_weights, expand3x3_bias):\n    \"\"\"\n    Implements the forward pass of a SqueezeNet Fire Module.\n    \"\"\"\n    # 1. Squeeze Layer (1x1 conv)\n    squeeze_out = conv2d(input_tensor, squeeze_weights, squeeze_bias)\n    squeeze_act = relu(squeeze_out)\n\n    # 2. Expand Layer\n    # Branch 1: 1x1 conv\n    expand1x1_out = conv2d(squeeze_act, expand1x1_weights, expand1x1_bias)\n    expand1x1_act = relu(expand1x1_out)\n\n    # Branch 2: 3x3 conv (with padding='same' -> pad=1)\n    expand3x3_out = conv2d(squeeze_act, expand3x3_weights, expand3x3_bias, padding=1)\n    expand3x3_act = relu(expand3x3_out)\n\n    # 3. Concatenate along channel axis\n    output = np.concatenate([expand1x1_act, expand3x3_act], axis=-1)\n\n    return output",
  "example": {
    "input": "input_tensor: (H=32, W=32, C_in=3)\nSqueeze 1x1: s1x1=16 filters\nExpand 1x1: e1x1=64 filters\nExpand 3x3: e3x3=64 filters",
    "output": "Output Tensor Shape: (32, 32, 128)\nValues: Concatenation of [ReLU(Expand1x1), ReLU(Expand3x3)]",
    "reasoning": "1. Squeeze layer reduces 3 channels to 16 channels using 1x1 convolution + ReLU.\n2. Expand layer splits into two branches:\n   - Branch A: Apply 64 1x1 filters to the 16-channel squeeze output -> Output shape (32, 32, 64).\n   - Branch B: Apply 64 3x3 filters (with padding) to the 16-channel squeeze output -> Output shape (32, 32, 64).\n3. Concatenate the outputs (64 + 64) to get a final depth of 128. Spatial dimensions (32x32) remain unchanged."
  },
  "test_cases": [
    {
      "test": "import numpy as np\ninput = np.ones((5, 5, 2))\ns_w = np.ones((1, 1, 2, 1))\ns_b = np.zeros(1)\ne1_w = np.ones((1, 1, 1, 2))\ne1_b = np.zeros(2)\ne3_w = np.ones((3, 3, 1, 3))\ne3_b = np.zeros(3)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res.shape)",
      "expected_output": "(5, 5, 5)"
    },
    {
      "test": "import numpy as np\n# Simple value test\n# Input 1s (shape 3,3,1)\ninput = np.ones((3, 3, 1))\n# Squeeze: 1 filter, weight 2, bias 0 -> Output 2s\ns_w = np.full((1, 1, 1, 1), 2.0)\ns_b = np.zeros(1)\n# Expand 1x1: 1 filter, weight 0.5, bias 0 -> Output 2*0.5 = 1s\ne1_w = np.full((1, 1, 1, 1), 0.5)\ne1_b = np.zeros(1)\n# Expand 3x3: 1 filter, weight 0, bias 0 -> Output 0s\ne3_w = np.zeros((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\n# Check first channel (Expand 1x1) at middle pixel\nprint(res[1, 1, 0])",
      "expected_output": "1.0"
    },
    {
      "test": "import numpy as np\n# Padding test for 3x3\ninput = np.ones((3, 3, 1))\n# Squeeze: weight 1 -> Output 1s\ns_w = np.ones((1, 1, 1, 1))\ns_b = np.zeros(1)\n# Expand 1x1: weights 0 (ignore)\ne1_w = np.zeros((1, 1, 1, 1))\ne1_b = np.zeros(1)\n# Expand 3x3: weight 1 -> Sum neighbors\n# At corner (0,0), with 'same' padding (zeros), only 4 neighbors are non-zero (1s)\n# Sum = 4 * 1 = 4\ne3_w = np.ones((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\n# Check second channel (Expand 3x3) at corner (0,0)\nprint(res[0, 0, 1])",
      "expected_output": "4.0"
    },
    {
      "test": "import numpy as np\n# Middle pixel full sum\n# Same setup as above, check middle pixel (1,1)\n# 9 neighbors are 1s -> Sum = 9\ninput = np.ones((3, 3, 1))\ns_w = np.ones((1, 1, 1, 1))\ns_b = np.zeros(1)\ne1_w = np.zeros((1, 1, 1, 1))\ne1_b = np.zeros(1)\ne3_w = np.ones((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res[1, 1, 1])",
      "expected_output": "9.0"
    }
  ]
}
Binary file not shown.
11 changes: 11 additions & 0 deletions questions/190_implement-squeezenet-fire-module/description.md
@@ -0,0 +1,11 @@
Implement the Fire Module from SqueezeNet, a highly parameter-efficient convolutional neural network architecture. The Fire Module consists of a "squeeze" layer (1x1 convolutions) that feeds into an "expand" layer (a mix of 1x1 and 3x3 convolutions).

Your task is to implement the forward pass of the Fire Module using NumPy. Given an input tensor and weights/biases for the squeeze and expand layers, compute the output.

**Functions required:**

1. `squeeze`: Applies 1x1 convolution to reduce channels.
2. `expand`: Applies 1x1 and 3x3 convolutions in parallel and concatenates results.
3. `fire_module`: Combines squeeze and expand (exposed as `fire_module_forward` in the starter code).

Assume stride=1 and padding='same' for 3x3 convolutions (so spatial dimensions are preserved).
5 changes: 5 additions & 0 deletions questions/190_implement-squeezenet-fire-module/example.json
@@ -0,0 +1,5 @@
{
  "input": "input_tensor: (H=32, W=32, C_in=3)\nSqueeze 1x1: s1x1=16 filters\nExpand 1x1: e1x1=64 filters\nExpand 3x3: e3x3=64 filters",
  "output": "Output Tensor Shape: (32, 32, 128)\nValues: Concatenation of [ReLU(Expand1x1), ReLU(Expand3x3)]",
  "reasoning": "1. Squeeze layer reduces 3 channels to 16 channels using 1x1 convolution + ReLU.\n2. Expand layer splits into two branches:\n   - Branch A: Apply 64 1x1 filters to the 16-channel squeeze output -> Output shape (32, 32, 64).\n   - Branch B: Apply 64 3x3 filters (with padding) to the 16-channel squeeze output -> Output shape (32, 32, 64).\n3. Concatenate the outputs (64 + 64) to get a final depth of 128. Spatial dimensions (32x32) remain unchanged."
}
41 changes: 41 additions & 0 deletions questions/190_implement-squeezenet-fire-module/learn.md
@@ -0,0 +1,41 @@
## SqueezeNet Fire Module Explained

SqueezeNet achieves AlexNet-level accuracy with 50x fewer parameters through its "Fire Module".

### Architecture

The Fire Module consists of two layers:

1. **Squeeze Layer**: A convolution layer with only 1x1 filters.
   - Purpose: Reduce the number of input channels before feeding them to larger 3x3 filters. This minimizes computation and parameters.
   - Notation: $s_{1x1}$ is the number of 1x1 filters in the squeeze layer.

2. **Expand Layer**: A mix of 1x1 and 3x3 convolution filters.
   - Purpose: Capture both local details (3x3) and pointwise features (1x1).
   - Notation:
     - $e_{1x1}$: Number of 1x1 filters in the expand layer.
     - $e_{3x3}$: Number of 3x3 filters in the expand layer.
   - The outputs of these two parallel convolutions are concatenated along the channel dimension.

### Operation

The flow is:
`Input -> Squeeze (1x1 conv + ReLU) -> Expand ( (1x1 conv + ReLU) || (3x3 conv + ReLU) ) -> Concatenate -> Output`

### Why It Works

A standard 3x3 convolution on $C_{in}$ channels with $C_{out}$ filters has parameters:
$P_{standard} = 3 \times 3 \times C_{in} \times C_{out}$

In the Fire Module:

1. The squeeze layer reduces $C_{in}$ to $s_{1x1}$ (small). Parameters: $1 \times 1 \times C_{in} \times s_{1x1}$.
2. The expand 3x3 layer operates on only $s_{1x1}$ channels. Parameters: $3 \times 3 \times s_{1x1} \times e_{3x3}$.
3. The expand 1x1 layer adds more features cheaply. Parameters: $1 \times 1 \times s_{1x1} \times e_{1x1}$.

Total parameters are significantly lower if $s_{1x1} \ll C_{in}$.
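
To make the savings concrete, here is a minimal sketch of the arithmetic (weights only, biases ignored). The numbers are illustrative assumptions: 96 input channels with $s_{1x1}=16$ and $e_{1x1}=e_{3x3}=64$ (the fire2 configuration from the SqueezeNet paper), versus a standard 3x3 convolution producing the same 128 output channels.

```python
# Parameter-count comparison (weights only, biases ignored)
c_in, c_out = 96, 128          # input channels; total output channels
s1x1, e1x1, e3x3 = 16, 64, 64  # Fire Module sizes (e1x1 + e3x3 == c_out)

p_standard = 3 * 3 * c_in * c_out  # 110592
p_fire = (1 * 1 * c_in * s1x1      # squeeze:    1536
          + 1 * 1 * s1x1 * e1x1    # expand 1x1: 1024
          + 3 * 3 * s1x1 * e3x3)   # expand 3x3: 9216
print(p_standard, p_fire, round(p_standard / p_fire, 1))  # 110592 11776 9.4
```

For this single layer the Fire Module needs roughly 9x fewer weights, and the savings compound across the network.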

### Implementation Details

- **Padding**: Use 'same' padding for 3x3 convolutions so spatial dimensions match 1x1 outputs.
- **Concatenation**: Join along the channel axis (axis=-1 or 1 depending on format; here we assume (H, W, C)). The sketch below sanity-checks both details.
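
The sketch assumes the (H, W, C) layout used throughout this task: with stride=1 and pad=1, the 3x3 branch's output stays spatially aligned with the 1x1 branch, so the two concatenate cleanly along the channel axis.

```python
import numpy as np

H, W, s = 8, 8, 16
x = np.zeros((H, W, s))                      # squeeze output entering the 3x3 branch
x_pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))  # zero 'same' padding for a 3x3 kernel
h_out = x_pad.shape[0] - 3 + 1               # (H + 2) - 3 + 1 == H  (stride=1)
w_out = x_pad.shape[1] - 3 + 1               # likewise == W

e1 = np.zeros((H, W, 4))                     # stand-in for the 1x1 branch output
e3 = np.zeros((h_out, w_out, 4))             # stand-in for the 3x3 branch output
out = np.concatenate([e1, e3], axis=-1)      # join along the channel axis
print(out.shape)                             # (8, 8, 8)
```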
15 changes: 15 additions & 0 deletions questions/190_implement-squeezenet-fire-module/meta.json
@@ -0,0 +1,15 @@
{
  "id": "190",
  "title": "Implement SqueezeNet Fire Module",
  "difficulty": "medium",
  "category": "Deep Learning",
  "video": "",
  "likes": "0",
  "dislikes": "0",
  "contributor": [
    {
      "profile_link": "https://github.com/syed-nazmus-sakib",
      "name": "Syed Nazmus Sakib"
    }
  ]
}
75 changes: 75 additions & 0 deletions questions/190_implement-squeezenet-fire-module/solution.py
@@ -0,0 +1,75 @@
import numpy as np

def relu(x):
    """ReLU activation."""
    return np.maximum(0, x)

def conv2d(x, w, b, padding=0):
    """
    Apply 2D convolution.
    x: (H, W, C_in)
    w: (kh, kw, C_in, C_out)
    b: (C_out,)
    padding: int (symmetrical padding on H and W)
    """
    H, W, C_in = x.shape
    kh, kw, _, C_out = w.shape

    # Apply padding if needed
    if padding > 0:
        x_padded = np.pad(x, ((padding, padding), (padding, padding), (0, 0)), mode='constant')
    else:
        x_padded = x

    # Output dimensions (stride=1 with 'same' padding assumed, so H_out == H)
    H_out = H
    W_out = W

    # Optimization for 1x1 convolution
    if kh == 1 and kw == 1:
        # Flatten spatial dimensions: (H*W, C_in)
        x_flat = x.reshape(-1, C_in)
        # Weights: (C_in, C_out)
        w_flat = w.reshape(C_in, C_out)
        # Matrix multiplication + bias
        out_flat = np.dot(x_flat, w_flat) + b
        return out_flat.reshape(H, W, C_out)

    output = np.zeros((H_out, W_out, C_out))

    # General convolution (loops)
    for h in range(H_out):
        for w_idx in range(W_out):
            # Extract patch
            patch = x_padded[h:h+kh, w_idx:w_idx+kw, :]
            # Convolution for each output channel
            for c in range(C_out):
                kernel = w[:, :, :, c]
                output[h, w_idx, c] = np.sum(patch * kernel) + b[c]

    return output

def fire_module_forward(input_tensor,
                        squeeze_weights, squeeze_bias,
                        expand1x1_weights, expand1x1_bias,
                        expand3x3_weights, expand3x3_bias):
    """
    Implements the forward pass of a SqueezeNet Fire Module.
    """
    # 1. Squeeze Layer (1x1 conv)
    squeeze_out = conv2d(input_tensor, squeeze_weights, squeeze_bias)
    squeeze_act = relu(squeeze_out)

    # 2. Expand Layer
    # Branch 1: 1x1 conv
    expand1x1_out = conv2d(squeeze_act, expand1x1_weights, expand1x1_bias)
    expand1x1_act = relu(expand1x1_out)

    # Branch 2: 3x3 conv (with padding='same' -> pad=1)
    expand3x3_out = conv2d(squeeze_act, expand3x3_weights, expand3x3_bias, padding=1)
    expand3x3_act = relu(expand3x3_out)

    # 3. Concatenate along channel axis
    output = np.concatenate([expand1x1_act, expand3x3_act], axis=-1)

    return output
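
# Optional smoke test (an illustrative sketch, not part of the graded solution).
# Uses random weights and the example dimensions from the task description
# (32x32x3 input, s1x1=16, e1x1=e3x3=64) to check the output shape and the
# non-negativity guaranteed by the final ReLUs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    out = fire_module_forward(
        rng.standard_normal((32, 32, 3)),
        rng.standard_normal((1, 1, 3, 16)), np.zeros(16),
        rng.standard_normal((1, 1, 16, 64)), np.zeros(64),
        rng.standard_normal((3, 3, 16, 64)), np.zeros(64),
    )
    assert out.shape == (32, 32, 128)
    assert (out >= 0).all()  # every output value passed through ReLU
    print("smoke test passed:", out.shape)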
28 changes: 28 additions & 0 deletions questions/190_implement-squeezenet-fire-module/starter_code.py
@@ -0,0 +1,28 @@
import numpy as np

def fire_module_forward(input_tensor,
                        squeeze_weights, squeeze_bias,
                        expand1x1_weights, expand1x1_bias,
                        expand3x3_weights, expand3x3_bias):
    """
    Implements the forward pass of a SqueezeNet Fire Module.

    Args:
        input_tensor: (H, W, C_in)
        squeeze_weights: (1, 1, C_in, s1x1)
        squeeze_bias: (s1x1,)
        expand1x1_weights: (1, 1, s1x1, e1x1)
        expand1x1_bias: (e1x1,)
        expand3x3_weights: (3, 3, s1x1, e3x3)
        expand3x3_bias: (e3x3,)

    Returns:
        output_tensor: (H, W, e1x1 + e3x3)

    Note:
        - All convolutions use stride=1.
        - 3x3 convolution uses padding='same' (pad=1).
        - Apply ReLU activation after each convolution (max(0, x)).
    """
    # Your code here
    pass
18 changes: 18 additions & 0 deletions questions/190_implement-squeezenet-fire-module/tests.json
@@ -0,0 +1,18 @@
[
  {
    "test": "import numpy as np\ninput = np.ones((5, 5, 2))\ns_w = np.ones((1, 1, 2, 1))\ns_b = np.zeros(1)\ne1_w = np.ones((1, 1, 1, 2))\ne1_b = np.zeros(2)\ne3_w = np.ones((3, 3, 1, 3))\ne3_b = np.zeros(3)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res.shape)",
    "expected_output": "(5, 5, 5)"
  },
  {
    "test": "import numpy as np\n# Simple value test\n# Input 1s (shape 3,3,1)\ninput = np.ones((3, 3, 1))\n# Squeeze: 1 filter, weight 2, bias 0 -> Output 2s\ns_w = np.full((1, 1, 1, 1), 2.0)\ns_b = np.zeros(1)\n# Expand 1x1: 1 filter, weight 0.5, bias 0 -> Output 2*0.5 = 1s\ne1_w = np.full((1, 1, 1, 1), 0.5)\ne1_b = np.zeros(1)\n# Expand 3x3: 1 filter, weight 0, bias 0 -> Output 0s\ne3_w = np.zeros((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\n# Check first channel (Expand 1x1) at middle pixel\nprint(res[1, 1, 0])",
    "expected_output": "1.0"
  },
  {
    "test": "import numpy as np\n# Padding test for 3x3\ninput = np.ones((3, 3, 1))\n# Squeeze: weight 1 -> Output 1s\ns_w = np.ones((1, 1, 1, 1))\ns_b = np.zeros(1)\n# Expand 1x1: weights 0 (ignore)\ne1_w = np.zeros((1, 1, 1, 1))\ne1_b = np.zeros(1)\n# Expand 3x3: weight 1 -> Sum neighbors\n# At corner (0,0), with 'same' padding (zeros), only 4 neighbors are non-zero (1s)\n# Sum = 4 * 1 = 4\ne3_w = np.ones((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\n# Check second channel (Expand 3x3) at corner (0,0)\nprint(res[0, 0, 1])",
    "expected_output": "4.0"
  },
  {
    "test": "import numpy as np\n# Middle pixel full sum\n# Same setup as above, check middle pixel (1,1)\n# 9 neighbors are 1s -> Sum = 9\ninput = np.ones((3, 3, 1))\ns_w = np.ones((1, 1, 1, 1))\ns_b = np.zeros(1)\ne1_w = np.zeros((1, 1, 1, 1))\ne1_b = np.zeros(1)\ne3_w = np.ones((3, 3, 1, 1))\ne3_b = np.zeros(1)\nres = fire_module_forward(input, s_w, s_b, e1_w, e1_b, e3_w, e3_b)\nprint(res[1, 1, 1])",
    "expected_output": "9.0"
  }
]
2 changes: 1 addition & 1 deletion utils/build_bundle.py
@@ -37,7 +37,7 @@ def bundle_one(folder: pathlib.Path):
        meta[f"{lang}_test_cases"] = load_json(sub / "tests.json")

    out_path = OUTDIR / f"{meta['id']}.json"
-   out_path.write_text(json.dumps(meta, indent=2, ensure_ascii=False))
+   out_path.write_text(json.dumps(meta, indent=2, ensure_ascii=False), encoding="utf-8")
    print(f"✓ bundled {out_path.name}")

def main():