OpenCV – Stream video to web browser/HTML page

Last updated on July 9, 2021.

In this tutorial, you will learn how to use OpenCV to stream video from a webcam to a web browser/HTML page using Flask and Python.

Ever have your machine stolen?

Mine was stolen over the weekend. And let me tell you, I'm pissed.

I can't share too many details as it's an active criminal investigation, but here's what I can tell you:

My wife and I moved to Philadelphia, PA from Norwalk, CT nearly six months ago. I have a car, which I don't drive often, but still keep just in case of emergencies.

Parking is hard to find in our neighborhood, so I was in need of a parking garage.

I heard about a garage, signed up, and started parking my car there.

Fast forward to this past Sunday.

My wife and I arrived at the parking garage to grab my car. We were about to head down to Maryland to visit my parents and have some blue crab (Maryland is famous for its crabs).

I walked to my car and took off the cover.

I was immediately confused — this isn't my car.

Where the #$&@ is my car?

After a few short minutes I realized the reality — my car was stolen.

Over the past week, my work on my upcoming Raspberry Pi for Computer Vision book was interrupted — I've been working with the owner of the parking garage, the Philadelphia Police Department, and the GPS tracking service on my car to figure out what happened.

I can't publicly go into any details until it's resolved, but let me tell you, there's a whole mess of paperwork, police reports, attorney messages, and insurance claims that I'm wading neck-deep through.

I'm hoping that this issue gets resolved in the next month — I hate distractions, especially distractions that take me away from what I love doing the most — teaching computer vision and deep learning.

I've managed to use my frustrations to inspire a new security-related computer vision blog post.

In this post, we'll learn how to stream video to a web browser using Flask and OpenCV.

You will be able to deploy the system on a Raspberry Pi in less than 5 minutes:

  • Simply install the required packages/software and start the script.
  • Then open your computer/smartphone browser to navigate to the URL/IP address to watch the video feed (and ensure nothing of yours is stolen).

There's nothing like a little video evidence to catch thieves.

While I continue to do paperwork with the police, insurance, etc., you can begin to arm yourself with Raspberry Pi cameras to catch bad guys wherever you live and work.

To learn how to use OpenCV and Flask to stream video to a web browser/HTML page, just keep reading!

  • Update July 2021: Added two new sections. The first section provides suggestions for using Django as an alternative to the Flask web framework. The second section discusses using ImageZMQ to stream live video over a network from multiple camera sources to a single central server.


OpenCV – Stream video to web browser/HTML page

In this tutorial we will begin by discussing Flask, a micro web framework for the Python programming language.

We'll learn the fundamentals of motion detection so that we can apply it to our project. We'll then implement motion detection by means of a background subtractor.

From there, we will combine Flask with OpenCV, enabling us to:

  1. Access frames from an RPi camera module or USB webcam.
  2. Process the frames and apply an arbitrary algorithm (here we'll be using background subtraction/motion detection, but you could apply image classification, object detection, etc.).
  3. Stream the results to a web page/web browser.

Additionally, the code we'll be covering will be able to support multiple clients (i.e., more than one person/web browser/tab accessing the stream at once), something the vast majority of examples you will find online cannot handle.

Putting all these pieces together results in a home surveillance system capable of performing motion detection and then streaming the video result to your web browser.

Let's get started!

The Flask web framework

Figure 1: Flask is a micro web framework for Python (image source).

In this section we'll briefly discuss the Flask web framework and how to install it on your system.

Flask is a popular micro web framework written in the Python programming language.

Along with Django, Flask is one of the most common web frameworks you'll encounter when building web applications using Python.

However, unlike Django, Flask is very lightweight, making it super easy to build basic web applications.

As we'll see in this section, we'll only need a small amount of code to facilitate live video streaming with Flask — the rest of the code either involves (1) OpenCV and accessing our video stream or (2) ensuring our code is thread safe and can handle multiple clients.

If you ever need to install Flask on a machine, it's as simple as the following command:

$ pip install flask

While you're at it, go ahead and install NumPy, OpenCV, and imutils:

$ pip install numpy
$ pip install opencv-contrib-python
$ pip install imutils

Note: If you'd like the full install of OpenCV including "non-free" (patented) algorithms, be sure to compile OpenCV from source.

Project structure

Before we move on, let's take a look at our directory structure for the project:

$ tree --dirsfirst
.
├── pyimagesearch
│   ├── motion_detection
│   │   ├── __init__.py
│   │   └── singlemotiondetector.py
│   └── __init__.py
├── templates
│   └── index.html
└── webstreaming.py

3 directories, 5 files

To perform background subtraction and motion detection we'll be implementing a class named SingleMotionDetector — this class will live inside the singlemotiondetector.py file found in the motion_detection submodule of pyimagesearch.

The webstreaming.py file will use OpenCV to access our webcam, perform motion detection via SingleMotionDetector, and then serve the output frames to our web browser via the Flask web framework.

In order for our web browser to have something to display, we need to populate the contents of index.html with HTML used to serve the video feed. We'll only need to insert some basic HTML markup — Flask will handle actually sending the video stream to our browser for us.

Implementing a basic motion detector

Figure 2: Video surveillance with Raspberry Pi, OpenCV, Flask, and web streaming. By using background subtraction for motion detection, we have detected motion where I am moving in my chair.

Our motion detector algorithm will detect motion by means of background subtraction.

Most background subtraction algorithms work by:

  1. Accumulating the weighted average of the previous N frames
  2. Taking the current frame and subtracting it from the weighted average of frames
  3. Thresholding the output of the subtraction to highlight the regions with substantial differences in pixel values ("white" for foreground and "black" for background)
  4. Applying basic image processing techniques such as erosions and dilations to remove noise
  5. Utilizing contour detection to extract the regions containing motion

Our motion detection implementation will live inside the SingleMotionDetector class, which can be found in singlemotiondetector.py.

We call this a "single motion detector" as the algorithm itself is only interested in finding the single, largest region of motion.

We can easily extend this method to handle multiple regions of motion as well (a sketch of that variant appears right after we finish the detect method below).

Let's go ahead and implement the motion detector.

Open up the singlemotiondetector.py file and insert the following code:

# import the necessary packages
import numpy as np
import imutils
import cv2

class SingleMotionDetector:
    def __init__(self, accumWeight=0.5):
        # store the accumulated weight factor
        self.accumWeight = accumWeight

        # initialize the background model
        self.bg = None

Lines 2-4 handle our required imports.

All of these are fairly standard, including NumPy for numerical processing, imutils for our convenience functions, and cv2 for our OpenCV bindings.

We then define our SingleMotionDetector class on Line 6. The class accepts an optional argument, accumWeight, which is the factor used for our accumulated weighted average.

The larger accumWeight is, the less the background (bg) will be factored in when accumulating the weighted average.

Conversely, the smaller accumWeight is, the more the background bg will be considered when computing the average.

Setting accumWeight=0.5 weights both the background and foreground evenly — I often recommend this as a starting point value (you can then adjust it based on your own experiments).
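For reference, cv2.accumulateWeighted maintains a running average of the form dst = (1 - alpha) * dst + alpha * src, where alpha plays the role of our accumWeight. Here is a minimal NumPy sketch of that update (an illustration of the standard OpenCV definition, not code from this project):

import numpy as np

def accumulate_weighted(src, dst, alpha):
    # running average: dst = (1 - alpha) * dst + alpha * src
    # larger alpha -> new frames dominate the model;
    # smaller alpha -> the background model changes more slowly
    dst[:] = (1.0 - alpha) * dst + alpha * src.astype("float")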

Next, let's define the update method, which will accept an input frame and compute the weighted average:

    def update(self, image):
        # if the background model is None, initialize it
        if self.bg is None:
            self.bg = image.copy().astype("float")
            return

        # update the background model by accumulating the weighted
        # average
        cv2.accumulateWeighted(image, self.bg, self.accumWeight)

In the case that our bg frame is None (implying that update has never been called), we simply store the bg frame (Lines 15-18).

Otherwise, we compute the weighted average between the input frame, the existing background bg, and our respective accumWeight factor.

Given our background bg, we can now apply motion detection via the detect method:

    def detect(self, image, tVal=25):
        # compute the absolute difference between the background model
        # and the image passed in, then threshold the delta image
        delta = cv2.absdiff(self.bg.astype("uint8"), image)
        thresh = cv2.threshold(delta, tVal, 255, cv2.THRESH_BINARY)[1]

        # perform a series of erosions and dilations to remove small
        # blobs
        thresh = cv2.erode(thresh, None, iterations=2)
        thresh = cv2.dilate(thresh, None, iterations=2)

The detect method requires a single parameter along with an optional one:

  • image: The input frame/image that motion detection will be applied to.
  • tVal: The threshold value used to mark a particular pixel as "motion" or not.

Given our input image, we compute the absolute difference between the image and the bg (Line 27).

Any pixel locations that have a difference > tVal are set to 255 (white; foreground); otherwise they are set to 0 (black; background) (Line 28).

A series of erosions and dilations are performed to remove noise and small, localized areas of motion that would otherwise be considered false positives (likely due to reflections or rapid changes in light).

The next step is to apply contour detection to extract any motion regions:

        # find contours in the thresholded image and initialize the
        # minimum and maximum bounding box regions for motion
        cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        (minX, minY) = (np.inf, np.inf)
        (maxX, maxY) = (-np.inf, -np.inf)

Lines 37-39 perform contour detection on our thresh image.

We then initialize two sets of bookkeeping variables to keep track of the location where any motion is contained (Lines 40 and 41). These variables will form the "bounding box," which will tell us the location of where the motion is taking place.

The final step is to populate these variables (provided motion exists in the frame, of course):

        # if no contours were found, return None
        if len(cnts) == 0:
            return None

        # otherwise, loop over the contours
        for c in cnts:
            # compute the bounding box of the contour and use it to
            # update the minimum and maximum bounding box regions
            (x, y, w, h) = cv2.boundingRect(c)
            (minX, minY) = (min(minX, x), min(minY, y))
            (maxX, maxY) = (max(maxX, x + w), max(maxY, y + h))

        # otherwise, return a tuple of the thresholded image along
        # with bounding box
        return (thresh, (minX, minY, maxX, maxY))

On Lines 43-45 we make a check to see if our contours list is empty.

If that's the case, then there was no motion found in the frame and we can safely ignore it.

Otherwise, motion does exist in the frame, so we need to start looping over the contours (Line 48).

For each contour, we compute the bounding box and then update our bookkeeping variables (Lines 47-53), finding the minimum and maximum (x, y)-coordinates where all motion has taken place.

Finally, we return the bounding box location to the calling function.
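As noted earlier, the detector can be extended to report every region of motion rather than one merged box. Here is a minimal sketch of such a variant method you could add to SingleMotionDetector (the detect_multi name and minArea parameter are my own additions, not part of the original implementation):

    def detect_multi(self, image, tVal=25, minArea=500):
        # same differencing, thresholding, and cleanup as detect
        delta = cv2.absdiff(self.bg.astype("uint8"), image)
        thresh = cv2.threshold(delta, tVal, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.erode(thresh, None, iterations=2)
        thresh = cv2.dilate(thresh, None, iterations=2)

        # keep one bounding box per sufficiently large contour
        # instead of merging everything into a single box
        cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
            cv2.CHAIN_APPROX_SIMPLE)
        cnts = imutils.grab_contours(cnts)
        boxes = [cv2.boundingRect(c) for c in cnts
            if cv2.contourArea(c) >= minArea]
        return (thresh, boxes)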

Combining OpenCV with Flask

Figure 3: OpenCV and Flask (a Python micro web framework) make the perfect pair for web streaming and video surveillance projects involving the Raspberry Pi and similar hardware.

Let's go ahead and combine OpenCV with Flask to serve up frames from a video stream (running on a Raspberry Pi) to a web browser.

Open up the webstreaming.py file in your project structure and insert the following code:

# import the necessary packages
from pyimagesearch.motion_detection import SingleMotionDetector
from imutils.video import VideoStream
from flask import Response
from flask import Flask
from flask import render_template
import threading
import argparse
import datetime
import imutils
import time
import cv2

Lines 2-12 handle our required imports:

  • Line 2 imports our SingleMotionDetector class which we implemented above.
  • The VideoStream class (Line 3) will enable us to access our Raspberry Pi camera module or USB webcam.
  • Lines 4-6 handle importing our required Flask packages — we'll be using these packages to render our index.html template and serve it up to clients.
  • Line 7 imports the threading library to ensure we can support concurrency (i.e., multiple clients, web browsers, and tabs at the same time).

Let's move on to performing a few initializations:

# initialize the output frame and a lock used to ensure thread-safe
# exchanges of the output frames (useful when multiple browsers/tabs
# are viewing the stream)
outputFrame = None
lock = threading.Lock()

# initialize a flask object
app = Flask(__name__)

# initialize the video stream and allow the camera sensor to
# warmup
#vs = VideoStream(usePiCamera=1).start()
vs = VideoStream(src=0).start()
time.sleep(2.0)

First, we initialize our outputFrame on Line 17 — this will be the frame (post-motion detection) that will be served to the clients.

We then create a lock on Line 18, which will be used to ensure thread-safe behavior when updating the outputFrame (i.e., ensuring that one thread isn't trying to read the frame as it is being updated).

Line 21 initializes our Flask app itself while Lines 25-27 access our video stream:

  • If you are using a USB webcam, you can leave the code as is.
  • However, if you are using an RPi camera module you should uncomment Line 25 and comment out Line 26.

The next function, index, will render our index.html template and serve up the output video stream:

@app.route("/")
def index():
    # return the rendered template
    return render_template("index.html")

This function is quite simplistic — all it's doing is calling Flask's render_template on our HTML file.

We'll be reviewing the index.html file in the next section, so we'll hold off on a further discussion of the file contents until then.

Our next function is responsible for:

  1. Looping over frames from our video stream
  2. Applying motion detection
  3. Drawing any results on the outputFrame

Furthermore, this function must perform all of these operations in a thread-safe manner to ensure concurrency is supported.

Let's take a look at this function now:

def detect_motion(frameCount):
    # grab global references to the video stream, output frame, and
    # lock variables
    global vs, outputFrame, lock

    # initialize the motion detector and the total number of frames
    # read thus far
    md = SingleMotionDetector(accumWeight=0.1)
    total = 0

Our detect_motion function accepts a single argument, frameCount, which is the minimum number of frames required to build our background bg in the SingleMotionDetector class:

  • If we don't have at least frameCount frames, we'll continue to compute the accumulated weighted average.
  • Once frameCount is reached, we'll start performing background subtraction.

Line 37 grabs global references to three variables:

  • vs: Our instantiated VideoStream object
  • outputFrame: The output frame that will be served to clients
  • lock: The thread lock that we must obtain before updating outputFrame

Line 41 initializes our SingleMotionDetector class with a value of accumWeight=0.1, implying that the bg value will be weighted higher when computing the weighted average.

Line 42 then initializes the total number of frames read thus far — we'll need to ensure a sufficient number of frames have been read to build our background model.

From there, we'll be able to perform background subtraction.

With these initializations complete, we can now start looping over frames from the camera:

    # loop over frames from the video stream
    while True:
        # read the next frame from the video stream, resize it,
        # convert the frame to grayscale, and blur it
        frame = vs.read()
        frame = imutils.resize(frame, width=400)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (7, 7), 0)

        # grab the current timestamp and draw it on the frame
        timestamp = datetime.datetime.now()
        cv2.putText(frame, timestamp.strftime(
            "%A %d %B %Y %I:%M:%S%p"), (10, frame.shape[0] - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

Line 48 reads the next frame from our camera, while Lines 49-51 perform preprocessing, including:

  • Resizing to have a width of 400px (the smaller our input frame is, the less data there is, and thus the faster our algorithms will run).
  • Converting to grayscale.
  • Gaussian blurring (to reduce noise).

We then grab the current timestamp and draw it on the frame (Lines 54-57).

With one final check, we can perform motion detection:

        # if the total number of frames has reached a sufficient
        # number to construct a reasonable background model, then
        # continue to process the frame
        if total > frameCount:
            # detect motion in the image
            motion = md.detect(gray)

            # check to see if motion was found in the frame
            if motion is not None:
                # unpack the tuple and draw the box surrounding the
                # "motion area" on the output frame
                (thresh, (minX, minY, maxX, maxY)) = motion
                cv2.rectangle(frame, (minX, minY), (maxX, maxY),
                    (0, 0, 255), 2)

        # update the background model and increment the total number
        # of frames read thus far
        md.update(gray)
        total += 1

        # acquire the lock, set the output frame, and release the
        # lock
        with lock:
            outputFrame = frame.copy()

On Line 62, we ensure that we have read at least frameCount frames to build our background subtraction model.

If so, we apply the .detect method of our motion detector, which returns a single variable, motion.

If motion is None, then we know no motion has taken place in the current frame. Otherwise, if motion is not None (Line 67), then we need to draw the bounding box coordinates of the motion region on the frame.

Line 76 updates our motion detection background model, while Line 77 increments the total number of frames read from the camera thus far.

Finally, Line 81 acquires the lock required to support thread concurrency while Line 82 sets the outputFrame.

We need to acquire the lock to ensure the outputFrame variable is not accidentally being read by a client while we are trying to update it.

Our next function, generate, is a Python generator used to encode our outputFrame as JPEG data — let's take a look at it now:

def generate():
    # grab global references to the output frame and lock variables
    global outputFrame, lock

    # loop over frames from the output stream
    while True:
        # wait until the lock is acquired
        with lock:
            # check if the output frame is available, otherwise skip
            # the iteration of the loop
            if outputFrame is None:
                continue

            # encode the frame in JPEG format
            (flag, encodedImage) = cv2.imencode(".jpg", outputFrame)

            # ensure the frame was successfully encoded
            if not flag:
                continue

        # yield the output frame in the byte format
        yield(b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
            bytearray(encodedImage) + b'\r\n')

Line 86 grabs global references to our outputFrame and lock, similar to the detect_motion function.

generate then starts an infinite loop on Line 89 that will continue until we kill the script.

Inside the loop, we:

  • First acquire the lock (Line 91).
  • Ensure the outputFrame is not empty (Line 94), which may happen if a frame is dropped from the camera sensor.
  • Encode the frame as a JPEG image on Line 98 — JPEG compression is performed here to reduce load on the network and ensure faster transmission of frames.
  • Check to see if the success flag has failed (Lines 101 and 102), implying that the JPEG compression failed and we should ignore the frame.
  • Finally, serve the encoded JPEG frame as a byte array that can be consumed by a web browser.

That was quite a lot of work in a short amount of code, so definitely make sure you review this function a few times to ensure you understand how it works.
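To make the byte format concrete, here is roughly what a client receives over the wire from this generator: an endless multipart stream in which each part is a single JPEG, separated by the boundary we declare in the video_feed route below (the frame contents are placeholders, not real output):

--frame
Content-Type: image/jpeg

<JPEG bytes for frame 1>
--frame
Content-Type: image/jpeg

<JPEG bytes for frame 2>
...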

The next function, video_feed, calls our generate function:

@app.route("/video_feed")
def video_feed():
    # return the response generated along with the specific media
    # type (mime type)
    return Response(generate(),
        mimetype = "multipart/x-mixed-replace; boundary=frame")

Notice how this function has the app.route signature, just like the index function above.

The app.route signature tells Flask that this function is a URL endpoint and that data is being served from http://your_ip_address/video_feed.

The output of video_feed is the live motion detection output, encoded as a byte array via the generate function. Your web browser is smart enough to take this byte array and display it as a live feed.

Our last code block handles parsing command line arguments and launching the Flask app:

# check to see if this is the main thread of execution
if __name__ == '__main__':
    # construct the argument parser and parse command line arguments
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--ip", type=str, required=True,
        help="ip address of the device")
    ap.add_argument("-o", "--port", type=int, required=True,
        help="ephemeral port number of the server (1024 to 65535)")
    ap.add_argument("-f", "--frame-count", type=int, default=32,
        help="# of frames used to construct the background model")
    args = vars(ap.parse_args())

    # start a thread that will perform motion detection
    t = threading.Thread(target=detect_motion, args=(
        args["frame_count"],))
    t.daemon = True
    t.start()

    # start the flask app
    app.run(host=args["ip"], port=args["port"], debug=True,
        threaded=True, use_reloader=False)

# release the video stream pointer
vs.stop()

Lines 118-125 handle parsing our command line arguments.

We need 3 arguments here, including:

  • --ip: The IP address of the system you are launching the webstreaming.py file from.
  • --port: The port number that the Flask app will run on (you'll typically supply a value of 8000 for this parameter).
  • --frame-count: The number of frames used to accumulate and build the background model before motion detection is performed. By default, we use 32 frames to build the background model.

Lines 128-131 launch a thread that will be used to perform motion detection.

Using a thread ensures the detect_motion function can safely run in the background — it will be constantly running and updating our outputFrame so we can serve any motion detection results to our clients.

Finally, Lines 134 and 135 launch the Flask app itself.

The HTML page structure

As we saw in webstreaming.py, we are rendering an HTML template named index.html.

The template itself is populated by the Flask web framework and then served to the web browser.

Your web browser then takes the generated HTML and renders it to your screen.

Let's inspect the contents of our index.html file:

<html>
  <head>
    <title>Pi Video Surveillance</title>
  </head>
  <body>
    <h1>Pi Video Surveillance</h1>
    <img src="{{ url_for('video_feed') }}">
  </body>
</html>

As we can see, this is a super basic web page; however, pay close attention to Line 7 — notice how we are instructing Flask to dynamically render the URL of our video_feed route.

Since the video_feed function is responsible for serving up frames from our webcam, the src of the image will be automatically populated with our output frames.

Our web browser is then smart enough to properly render the webpage and serve up the live video stream.

Putting the pieces together

Now that we've coded up our project, let's put it to the test.

Open up a terminal and execute the following command:

$ python webstreaming.py --ip 0.0.0.0 --port 8000
 * Serving Flask app "webstreaming" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
127.0.0.1 - - [26/Aug/2019 14:43:23] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [26/Aug/2019 14:43:23] "GET /video_feed HTTP/1.1" 200 -
127.0.0.1 - - [26/Aug/2019 14:43:24] "GET /favicon.ico HTTP/1.1" 404 -

As you can see in the video, I opened connections to the Flask/OpenCV server from multiple browsers, each with multiple tabs. I even pulled out my iPhone and opened a few connections from there. The server didn't skip a beat and continued to serve up frames reliably with Flask and OpenCV.

Using OpenCV to stream video with other web frameworks

In this tutorial, you learned how to stream video from a webcam to a browser window using Python's Flask web framework.

Flask is arguably one of the most easy-to-use, lightweight Python web frameworks, and while there are many, many alternatives to build websites with Python, the other super framework you may want to use is Django.

It definitely takes a bit more code to build websites in Django, but it also includes features that Flask does not, making it a potentially better option for larger production websites.

We didn't cover Django here today, but if you're interested in using Django instead of Flask, be sure to check out this thread on StackOverflow.
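That said, the core idea of this post carries over to Django almost directly. Here is a minimal sketch of the video feed endpoint as a Django view, assuming the generate() function from this post is importable (my own illustration, not code from the original tutorial):

# views.py - a hypothetical Django view reusing our generate() function
from django.http import StreamingHttpResponse

from webstreaming import generate  # assumes generate() is importable

def video_feed(request):
    # stream the same multipart JPEG response, Django-style
    return StreamingHttpResponse(generate(),
        content_type="multipart/x-mixed-replace; boundary=frame")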

Alternative methods for video streaming

Figure 4: The ImageZMQ library is designed for streaming video efficiently over a network. It is a Python package and integrates with OpenCV.

If you're interested in other video streaming options with OpenCV, my first suggestion would be to use ImageZMQ.

ImageZMQ was created by PyImageSearch reader Jeff Bass. The library is designed to pass video frames, from multiple cameras, across a network in real time.

Unlike RTSP or GStreamer, both of which can be a pain to configure, ImageZMQ is super easy to use and is very reliable thanks to the underlying ZMQ message passing library.

If you need a method to reliably stream video, potentially from multiple sources, ImageZMQ is the route I'd recommend.
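To give you a feel for the API, here is a minimal sender/receiver sketch with ImageZMQ (the server IP and port are placeholders, and this reflects the library's basic request/reply pattern as I understand it, so consult the ImageZMQ documentation before relying on it):

# sender.py - runs on each camera client (e.g., a Raspberry Pi)
import socket
import time

import imagezmq
from imutils.video import VideoStream

# connect to the central server (replace with your server's IP)
sender = imagezmq.ImageSender(connect_to="tcp://your-server-ip:5555")
rpiName = socket.gethostname()

vs = VideoStream(src=0).start()
time.sleep(2.0)

while True:
    # send each frame tagged with this device's hostname
    sender.send_image(rpiName, vs.read())

# receiver.py - runs on the central server
import cv2
import imagezmq

imageHub = imagezmq.ImageHub()

while True:
    # receive a frame from any connected client and acknowledge it
    (rpiName, frame) = imageHub.recv_image()
    imageHub.send_reply(b"OK")
    cv2.imshow(rpiName, frame)
    cv2.waitKey(1)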

What's next? I recommend PyImageSearch University.

Course information:
35+ total classes • 39h 44m video • Last updated: April 2022
★★★★★ 4.84 (128 Ratings) • 3,000+ Students Enrolled

I strongly believe that if you had the right teacher you could master computer vision and deep learning.

Do you think learning computer vision and deep learning has to be time-consuming, overwhelming, and complicated? Or has to involve complex mathematics and equations? Or requires a degree in computer science?

That's not the case.

All you need to master computer vision and deep learning is for someone to explain things to you in simple, intuitive terms. And that's exactly what I do. My mission is to change education and how complex Artificial Intelligence topics are taught.

If you're serious about learning computer vision, your next stop should be PyImageSearch University, the most comprehensive computer vision, deep learning, and OpenCV course online today. Here you'll learn how to successfully and confidently apply computer vision to your work, research, and projects. Join me in computer vision mastery.

Inside PyImageSearch University you'll find:

  • 35+ courses on essential computer vision, deep learning, and OpenCV topics
  • 35+ Certificates of Completion
  • 39+ hours of on-demand video
  • Brand new courses released regularly, ensuring you can keep up with state-of-the-art techniques
  • Pre-configured Jupyter Notebooks in Google Colab
  • ✓ Run all code examples in your web browser — works on Windows, macOS, and Linux (no dev environment configuration required!)
  • ✓ Access to centralized code repos for all 450+ tutorials on PyImageSearch
  • Easy one-click downloads for code, datasets, pre-trained models, etc.
  • Access on mobile, laptop, desktop, etc.

Click here to join PyImageSearch University

Summary

In this tutorial you learned how to stream frames from a server machine to a client web browser. Using this web streaming we were able to build a basic security application to monitor a room of our house for motion.

Background subtraction is an extremely common method utilized in computer vision. Typically, these algorithms are computationally efficient, making them suitable for resource-constrained devices, such as the Raspberry Pi.
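As an aside, OpenCV also ships more sophisticated background subtractors that you can swap in for our hand-rolled running average. Here is a minimal sketch using the built-in MOG2 subtractor (a variation of mine, not part of this tutorial's code; the parameter values are just reasonable starting points):

# minimal MOG2 background subtraction sketch (alternative approach)
import time

import cv2
from imutils.video import VideoStream

# MOG2 maintains its own per-pixel background model internally
backSub = cv2.createBackgroundSubtractorMOG2(history=120,
    varThreshold=25, detectShadows=False)

vs = VideoStream(src=0).start()
time.sleep(2.0)

while True:
    frame = vs.read()
    # apply() returns a binary foreground mask for the current frame
    fgMask = backSub.apply(frame)
    cv2.imshow("Foreground Mask", fgMask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

vs.stop()
cv2.destroyAllWindows()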

After implementing our background subtractor, we combined it with the Flask web framework, enabling us to:

  1. Access frames from an RPi camera module/USB webcam.
  2. Apply background subtraction/motion detection to each frame.
  3. Stream the results to a web page/web browser.

Furthermore, our implementation supports multiple clients, browsers, or tabs — something that you will not find in most other implementations.

Whenever you need to stream frames from a device to a web browser, definitely use this code as a template/starting point.

To download the source code to this post, and be notified when future posts are published here on PyImageSearch, just enter your email address in the form below!

Download the Source Code and FREE 17-page Resource Guide

Enter your email address below to get a .zip of the code and a FREE 17-page Resource Guide on Computer Vision, OpenCV, and Deep Learning. Inside you'll find my hand-picked tutorials, books, courses, and libraries to help you master CV and DL!


Source: https://pyimagesearch.com/2019/09/02/opencv-stream-video-to-web-browser-html-page/
