Capture audio and video in HTML5



Introduction


The ability to capture audio and video has been the Holy Grail of web development for a long time. For many years, you had to rely on browser plugins (Flash or Silverlight) to get the job done. Come on!

HTML5 to the rescue. It might not be apparent, but the rise of HTML5 has brought a surge of access to device hardware. Geolocation (GPS), the Orientation API (accelerometer), WebGL (GPU), and the Web Audio API (audio hardware) are perfect examples. These features are ridiculously powerful and expose high-level JavaScript APIs that sit on top of the system's foundational hardware capabilities.

This tutorial introduces navigator.mediaDevices.getUserMedia(), which allows web apps to access a user's camera and microphone.

The road to the getUserMedia() API

If you're not aware of its history, the road to the getUserMedia() API is an interesting tale.

Several variants of media-capture APIs evolved over the past few years. Many folks recognized the need to access native devices on the web, but that led many people to propose a new spec. Things got so messy that the W3C finally decided to form a working group. Their sole purpose? Make sense of the madness! The Devices and Sensors Working Group has been tasked to consolidate and standardize the plethora of proposals.

Here's a summary of what happened in 2011.

Round 1: HTML Media Capture

HTML Media Capture was the group's first go at standardizing media capture on the web. It overloads the <input type='file'> and adds new values for the accept parameter.

If you want to let users take a snapshot of themselves with the webcam, that's possible with capture=camera:
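
A minimal markup sketch based on the early draft syntax the article describes (later revisions of the spec moved capture into its own attribute, so treat the exact form as approximate):

    <input type="file" accept="image/*;capture=camera">

    <!-- The same draft also defined values for video and audio: -->
    <input type="file" accept="video/*;capture=camcorder">
    <input type="file" accept="audio/*;capture=microphone">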

Pretty nice, right? Semantically, it makes a lot of sense. Where this particular API falls short is the ability to do real-time effects, such as rendering live webcam data to a <canvas> and applying WebGL filters. HTML Media Capture only allows you to record a media file or take a snapshot in time.

Support

  • Android 3.0 browser—one of the first implementations. Check out this video to see it in action.
  • Google Chrome for Android (0.16)
  • Firefox Mobile 10.0
  • iOS 6 Safari and Chrome (partial support)

Round 2: Device element

Many thought HTML Media Capture was too limited, so a new spec emerged that supported any type of (future) device. Not surprisingly, the design called for a new element, the <device> element, which became the predecessor to getUserMedia().

Opera was among the first browsers to create initial implementations of video capture based on the <device> element. Soon after (the same day, to be precise), the WHATWG decided to scrap the <device> tag in favor of another up-and-comer, this time a JavaScript API called navigator.getUserMedia(). A week later, Opera put out new builds that included support for the updated getUserMedia() spec. Later that year, Microsoft joined the party by releasing a Lab for IE9 supporting the new spec.

Here's what <device> would have looked like:
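
(A rough reconstruction based on early drafts of the abandoned spec; attribute names shifted between revisions, so this is illustrative rather than definitive.)

    <device type="media" onchange="update(this.data)"></device>
    <video autoplay></video>
    <script>
      function update(stream) {
        document.querySelector('video').src = stream.url;
      }
    </script>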

Support:

Unfortunately, no released browser ever included <device>. One less API to worry about. <device> did have two great things going for it, though:

  • It was semantic.
  • It was easily extensible to support more than audio and video devices.

Take a breath. This stuff moves fast!

Round 3: WebRTC

The <device> element eventually went the way of the dodo.

The pace to find a suitable capture API accelerated thanks to the larger WebRTC (web real-time communication) effort. That spec is overseen by the Web Real-Time Communications Working Group. Google, Opera, Mozilla, and a few others have implementations.

getUserMedia() is related to WebRTC because it's the gateway into that set of APIs. It provides the means to access the user's local camera and microphone stream.

Support:

getUserMedia() has been available since Chrome 21, Opera 18, and Firefox 17. Support was initially provided by the Navigator.getUserMedia() method, but this has been deprecated.

You should now use the navigator.mediaDevices.getUserMedia() method, which is widely supported.

Get started

With getUserMedia(), you can finally tap into webcam and microphone input without a plugin. Camera access is now a call away, not an install away. It's baked directly into the browser. Excited yet?

Feature detection

Feature detection is a simple check for the existence of navigator.mediaDevices.getUserMedia:
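
For example, a check along these lines (a minimal sketch; very old browsers may only expose prefixed variants, which aren't covered here):

    function hasGetUserMedia() {
      return !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia);
    }

    if (hasGetUserMedia()) {
      // Good to go!
    } else {
      alert('getUserMedia() is not supported by your browser');
    }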

Gain access to an input device

To use the webcam or microphone, you need to request permission. The parameter to getUserMedia() is an object specifying the details and requirements for each type of media you want to access. For example, if you want to access the webcam, the parameter should be {video: true}. To use both the microphone and camera, pass {video: true, audio: true}:
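
A minimal sketch of what this looks like, streaming the result into a <video> element (the error handling here is illustrative):

    <video autoplay></video>

    <script>
      const constraints = {
        video: true,
        audio: true
      };

      const video = document.querySelector('video');

      navigator.mediaDevices.getUserMedia(constraints)
        .then((stream) => {
          // Hand the live MediaStream straight to the <video> element.
          video.srcObject = stream;
        })
        .catch((error) => {
          console.error('getUserMedia() error:', error);
        });
    </script>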

Okay. So what's going on here? Media capture is a perfect example of HTML5 APIs working together. It works in conjunction with your other HTML5 buddies, <audio> and <video>. Notice that you don't set a src attribute or include <source> elements on the <video> element. Instead of the URL of a media file, you give the video a MediaStream from the webcam.

You also tell the <video> to autoplay; otherwise, it would be frozen on the first frame. The addition of controls also works as you'd expect.

Set media constraints (resolution, height, and width)

The parameter to getUserMedia() can also be used to specify more requirements (or constraints) on the returned media stream. For example, instead of only basic access to video (for example, {video: true}), you can additionally require the stream to be HD:
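
For instance, a sketch that asks for video of at least 1280×720 (the constraint keys follow the Media Capture and Streams spec; the handlers are illustrative):

    const hdConstraints = {
      video: { width: { min: 1280 }, height: { min: 720 } }
    };

    navigator.mediaDevices.getUserMedia(hdConstraints)
      .then((stream) => {
        document.querySelector('video').srcObject = stream;
      })
      .catch((error) => {
        // See the note below about OverconstrainedError.
        console.error('getUserMedia() error:', error);
      });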

If the resolution isn't supported by the currently selected camera, getUserMedia() is rejected with an OverconstrainedError and the user isn't prompted to grant permission to access their camera.

For more configurations, see the Constraints API.

Select a media source

The navigator.mediaDevices.enumerateDevices() method provides information about available input and output devices, and makes it possible to select a camera or microphone. (The MediaStreamTrack.getSources() API has been deprecated.)

This example enables the user to choose an audio and video source:
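
A sketch of the idea (the select element IDs and the helper below are illustrative, not taken from the original demo):

    const audioSelect = document.querySelector('select#audioSource');
    const videoSelect = document.querySelector('select#videoSource');

    navigator.mediaDevices.enumerateDevices()
      .then((devices) => {
        for (const device of devices) {
          const option = document.createElement('option');
          option.value = device.deviceId;
          // Labels may be empty until the user has granted media permission.
          if (device.kind === 'audioinput') {
            option.text = device.label || `Microphone ${audioSelect.length + 1}`;
            audioSelect.appendChild(option);
          } else if (device.kind === 'videoinput') {
            option.text = device.label || `Camera ${videoSelect.length + 1}`;
            videoSelect.appendChild(option);
          }
        }
      });

    // Request a stream from whichever devices the user picked.
    function getSelectedStream() {
      const constraints = {
        audio: { deviceId: { exact: audioSelect.value } },
        video: { deviceId: { exact: videoSelect.value } }
      };
      return navigator.mediaDevices.getUserMedia(constraints);
    }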

Check out Sam Dutton's great demo of how to let users select the media source.

Security

getUserMedia() can only be called from an HTTPS URL or localhost. Otherwise, the promise from the call is rejected. getUserMedia() also doesn't work for cross-origin calls from iframes. For more information, see Deprecating Permissions in Cross-Origin Iframes.

All browsers generate an infobar upon the call to getUserMedia(), which gives users the option to grant or deny access to their cameras or microphones. Here's the permission dialog from Chrome:

This permission is persistent. That is, users don't have to grant or deny access every time. If users change their mind later, they can update their camera access options per origin from the browser settings.

The MediaStreamTrack actively uses the camera, which takes resources and keeps the camera open (and camera light on). When you no longer use a track, call track.stop() so that the camera can be closed.
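
For example, a minimal sketch (the helper name is hypothetical):

    // Stop every track on the stream once you're done with it; this
    // releases the camera and microphone and turns off the camera light.
    function stopStream(stream) {
      stream.getTracks().forEach((track) => track.stop());
    }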

Basic demo

Take screenshots

The <canvas> API's ctx.drawImage(video, 0, 0) method makes it trivial to draw <video> frames to <canvas>. Of course, now that you have video input through getUserMedia(), it's just as easy to create a photo-booth app with real-time video:
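
A sketch of the idea, assuming a page with a <video> fed by getUserMedia(), a <button> to trigger the snapshot, and an <img> to show the result (those elements are assumptions of this example):

    const video = document.querySelector('video');
    const img = document.querySelector('img');
    const canvas = document.createElement('canvas');

    document.querySelector('button').addEventListener('click', () => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      // Draw the current video frame to the canvas, then grab it as a data URL.
      canvas.getContext('2d').drawImage(video, 0, 0);
      img.src = canvas.toDataURL('image/png');
    });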

Apply effects

CSS Filters

With CSS filters, you can apply some gnarly effects to the <video> as it is captured:
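
For example, a sketch that cycles through a few filter functions each time the video is clicked (the particular filter list is just an illustration):

    const video = document.querySelector('video');
    const filters = ['grayscale(1)', 'sepia(1)', 'blur(3px)', 'invert(1)', ''];
    let index = 0;

    video.addEventListener('click', () => {
      // Apply the next CSS filter to the live <video> element.
      video.style.filter = filters[index++ % filters.length];
    });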

WebGL textures

One amazing use case for video capture is to render live input as a WebGL texture. Give Jerome Etienne's tutorial and demo a look. It talks about how to use getUserMedia() and Three.js to render live video into WebGL.

Use the getUserMedia API with the Web Audio API

Chrome supports live microphone input from getUserMedia() to the Web Audio API for real-time effects. It looks like this:
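
A rough sketch, piping the microphone through a filter node (the choice of a biquad filter is illustrative):

    const context = new AudioContext();

    navigator.mediaDevices.getUserMedia({ audio: true })
      .then((stream) => {
        // Microphone -> biquad filter -> speakers.
        const microphone = context.createMediaStreamSource(stream);
        const filter = context.createBiquadFilter();
        microphone.connect(filter);
        filter.connect(context.destination);
        // Browsers may suspend the AudioContext until a user gesture;
        // call context.resume() from a click handler if needed.
      })
      .catch((error) => {
        console.error('getUserMedia() error:', error);
      });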

For more information, see Chris Wilson's post.

Conclusion

Historically, device access on the web has been a tough nut to crack. Many tried, few succeeded. Most of the early ideas never gained widespread adoption or took hold outside of a proprietary environment. Perhaps the main problem has been that the web's security model is very different from the native world. In particular, you probably don't want every Joe Shmoe website to have random access to your video camera or microphone. It's been difficult to get right.

Since then, driven by the increasingly ubiquitous capabilities of mobile devices, the web has begun to provide much richer functionality. You now have APIs to take photos and control camera settings, record audio and video, and access other types of sensor data, such as location, motion, and device orientation. The Generic Sensor framework ties all this together, alongside generic APIs to enable web apps to access USB and interact with Bluetooth devices.

getUserMedia() was only the first wave of hardware interactivity.

Additional resources

  • Bruce Lawson's HTML5Doctor article
  • Bruce Lawson's dev.opera.com article

Demos

  • WebRTC samples: Canonical demos and code repository
  • Paul Neave's WebGL camera effects