This project uses traditional computer vision techniques, implemented in Python with OpenCV, to identify lane lines on the road.
The image processing pipeline can be divided into three stages, which involve the following techniques:
- Find the Region Of Interest (ROI)
- Image processing
  - Grayscale Conversion
  - Canny Edge Detection
  - Hough Line Transform
- Draw the Detected Lines
Following are the libraries and packages used in the project:

import cv2
import numpy as np

The flow of the proposed algorithm is shown in the next steps.
A video is taken as input, and frames are extracted from it. Processing is performed on each frame; after line detection, the frames are displayed.
cap = cv2.VideoCapture('VID-20191229-WA0064.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop when the video ends or a frame cannot be read
        break
    frame = process(frame)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()

An RGB color image is a vector-valued function across the image's spatial domain. Converting it to grayscale is one of the essential steps before detecting lane edges. The reason for converting from a multi-channel image to a single-channel one is that we are only concerned with the intensity data, which means we can safely ignore the colors. The grayscale step helps make the main lane lines stand out.

After converting the image to grayscale, the next step is to run Canny edge detection in OpenCV; the region of interest is isolated afterwards.
The Canny edge detector is a classic edge detection algorithm. It reduces the information in an image down to the useful structural edges present. It applies a double threshold to determine potential edges, then tracks edges by hysteresis: the detection is finalized by suppressing all weak edges that are not connected to strong edges.

The region found to contain the most road and exclude the most background across the test images was a triangle.
By finding the height and width of the video frame using 'image.shape', we obtain all three vertices of the triangle (in (x, y) order):
- Left bottom corner = (0, height)
- Right bottom corner = (width, height)
- Top corner = (width/2, height/2)
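A minimal sketch of deriving these vertices from 'image.shape' (the frame size here is illustrative; note that OpenCV points are (x, y), i.e. (column, row)):

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in 640x480 frame
height = frame.shape[0]
width = frame.shape[1]

roi_vertices = [(0, height),                # left bottom corner
                (width // 2, height // 2),  # top corner near the centre
                (width, height)]            # right bottom corner
print(roi_vertices)  # [(0, 480), (320, 240), (640, 480)]
```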

A mask is created that is equal in size to the video frame. Excluding all content outside the ROI mask after the Canny edge detector then further reduces the information in the image to the lane line edges. This elimination of unwanted edges is done using 'cv2.fillPoly'.

def roi(img, vertices):
    # Black mask the same size as the frame; fill the ROI polygon with white
    mask = np.zeros_like(img)
    mask_color = 255
    cv2.fillPoly(mask, vertices, mask_color)
    # Keep only the pixels inside the polygon
    mask_image = cv2.bitwise_and(img, mask)
    return mask_image

The Hough transform is a feature extraction technique, popular in computer vision and image analysis, that is used to find straight lines in an image. Any line in "x vs y" space can be transformed into a point in Hough space, whose basis is "m vs b" (the gradient and intercept of the line). The Hough transform is the algorithm that converts the representation of straight lines between these two spaces: any line in an image has a direct mapping to a point in Hough space.
It is often convenient to work with the Hough transform in polar coordinates; doing so results in the following hyper-parameters, which again need to be found by empirically testing on the test data sets:
- rho: distance resolution of the Hough grid, in pixels
- theta: angular resolution of the Hough grid, in radians
- threshold: minimum number of votes (intersections in a Hough grid cell)
- minLineLength: minimum number of pixels making up a line
- maxLineGap: maximum gap in pixels between connectable line segments
def process(image):
    h = image.shape[0]
    w = image.shape[1]
    # Triangular ROI: bottom-left corner, apex at the centre, bottom-right corner
    roi_vertices = [(0, h), (w // 2, h // 2), (w, h)]
    # gaussian_image = cv2.GaussianBlur(image, (5, 5), 0)
    gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
    canny = cv2.Canny(gray, 150, 170)
    cropped_image = roi(canny, np.array([roi_vertices], np.int32))
    lines = cv2.HoughLinesP(cropped_image, rho=2, theta=np.pi / 180,
                            threshold=50, lines=np.array([]),
                            minLineLength=40, maxLineGap=100)
    image_lines = draw_lines(image, lines)
    return image_lines

A blank image is created that is exactly equal in size to the video frame. The lines are drawn on this blank image at the coordinates found using the Hough transform.

def draw_lines(img, lines):
    blank_image = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    if lines is None:  # no lines detected in this frame
        return img
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(blank_image, (x1, y1), (x2, y2), (0, 255, 0), thickness=10)
    # Blend the line overlay onto the original frame
    img = cv2.addWeighted(img, 0.8, blank_image, 1, 0.0)
    return img

The video frame is superimposed with the lines marked on the blank image, which eventually creates a frame showing the detected lines.
Using the 'cv2.addWeighted' function, it is possible to view the detected lines superimposed on the input image for all the video frames.
