Mixing Device Cameras and the Web
This article includes several examples that require access to your camera. This API is a web standard, and the demos in this article will only have access to video streams if you opt in and allow access. Camera data is only visible to you, and nothing is saved by this article or its demos.
We are fairly used to apps on our mobile devices and computers having access to cameras and microphones for grabbing a selfie or joining a video call. These same media inputs are accessible to our web applications via the Media Devices API. We will focus on video input specifically to explore how this API can be used creatively with other web technology.
To follow along with the demo, please view this article on a device with a camera and allow access.
Demo
The quick path to loading a video stream
At its core, use of the camera consists of an API call to access the device camera’s video stream and an HTML element (such as video or canvas) to send the stream to for viewing.
<video playsinline muted autoplay></video>
let stream;
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: false
  });
  const vid = document.querySelector('video');
  vid.srcObject = stream;
  vid.onloadedmetadata = () => {
    vid.play();
  };
}
Let’s break down each step of that example:
- We create an empty video element (with a few attributes to guide the browser that our video will be friendly to a user, such as not using sound to start)
- In the JavaScript, we first check for the presence of the Media Devices API and its getUserMedia method. This is the primary method we deal with when working with video. It also requires https, so sites on plain http will not have access to this API. Feature detection is key with all things Media Devices, as every OS + browser + camera combination will be (at least a little) different.
- Once we know there is support, we ask the user if we can stream data from their device by calling getUserMedia with an options object that tells the browser specifically what we are requesting. In this case, we are telling the browser we want the default video stream it can find, but we do not want any microphone access.
- This method returns a Promise, and if the user allows access, we:
  - Receive the video stream
  - Set the video stream as our video element’s source
  - Play the stream once it is loaded and ready
As this is permission-based, a user can always reject the request. The user should be given enough information about why video access is being requested before it happens so they can make the best choice for themselves.
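For example, a minimal sketch of handling a denied (or otherwise failed) request could wrap the call in a try/catch and fall back gracefully; the startCamera function name here is just for illustration.

async function startCamera() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      video: true,
      audio: false
    });
    document.querySelector('video').srcObject = stream;
  } catch (err) {
    // The user declined, or no usable camera was found; explain and carry on without video
    console.warn(`Camera unavailable: ${err.name}`);
  }
}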
What can I do with this API?
There are straightforward reasons why media device access made it to the web, such as enabling videoconferencing. Access to the microphone also opens up possibilities with Speech Recognition. There are even ways to enable screen sharing through a similar Media Devices API (getDisplayMedia()).
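As a quick sketch, requesting a screen-share stream looks much like requesting a camera, only through getDisplayMedia() instead:

// Prompts the user to pick a screen, window, or tab to share
const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
document.querySelector('video').srcObject = screenStream;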
We can get as creative as we want by not just taking video at face value but thinking about it as an input to be combined with everything else that HTML, CSS, and JavaScript give us. People have combined JavaScript with video to create virtual Theremins and color-based music makers. CSS and Canvas open the doors to manipulating the appearance of the video through filters, blend modes, clip paths, and more.
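As one small sketch of that idea, and assuming a canvas element sits alongside our video element (note that 2D canvas filters are not supported in every browser), we could repaint each video frame through a filter:

const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function paint() {
  // Copy the current video frame onto the canvas, run through a 2D canvas filter
  ctx.filter = 'grayscale(1) contrast(1.2)';
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(paint);
}
paint();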
Demo
Getting more specific with a device’s capabilities and constraints
Feature detection is always key when accessing video, as not every device/browser/camera will have the same capabilities. There are many different combinations that drive the feature set of the specific video stream we request, and sometimes we will want our video stream to meet specific criteria to be useful.
In our earlier example, we asked if we have access to any video stream by passing { video: true } into the getUserMedia() method. This object we pass in represents our Constraints. With { video: true } we are telling the system to get us effectively any video stream, so this is the loosest constraint we can provide. There will be a default camera with default settings, and that is what we get.
However, we can start making more specific requests. As the developer, if I’d prefer to load a user-facing (selfie) camera, I can tell the API in my constraints object.
navigator.mediaDevices.getUserMedia({
  video: { facingMode: "user" }
})
Alternatively, I could prefer a camera on the back side of the device, in which case I could set { facingMode: "environment" }. The key is that so far these are all preferences. If I request this back camera but I am on a laptop that only has a front-facing camera, the API will play nice and still give me a video stream from the best available camera.
Other common constraints that can be available are frameRate, aspectRatio, and height/width.
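As a sketch, those preferences can be combined in a single constraints object, using ideal values to state a preference rather than a hard requirement:

navigator.mediaDevices.getUserMedia({
  video: {
    facingMode: "user",
    width: { ideal: 1280 },
    height: { ideal: 720 },
    frameRate: { ideal: 30 }
  }
})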
Demo
When we need to apply more restrictions, we can tell the API that we need specific values by setting ranges or requesting an exact match.
navigator.mediaDevices.getUserMedia({
  video: {
    facingMode: { exact: "environment" },
    aspectRatio: { min: 1, max: 1.7777777778 }
  }
})
These example constraints will require a camera that faces away from the user and has a stream that is at least a square size and no more than a 16:9 ratio. If there is no camera that can support these requirements, no camera is returned and the getUserMedia Promise is rejected with a MediaStreamError.
Responsible usage
As makers of the web, we have a responsibility to use these abilities well. We need to be clear and honest up front with how these inputs are used by us and what we build, and we need to empower the user to opt out at any point.
Once a video stream is loaded and playing, we can no longer set our constraints for the video via getUserMedia(). We need to store the video stream and adjust an individual video track within it.
In our starting example, we have a variable named stream that stores the stream (and its corresponding video track) resolved by the getUserMedia() Promise. We can act on this individual track at any point, such as stopping the video stream on a button press.
stopButton.addEventListener('click', e => {
  // We requested one video and no audio, so there is only one active track
  const track = stream.getTracks()[0];
  track.stop();
})
This turns the camera off (as you can see if your device has a light indicator for the camera), but the video element might remain frozen on the last captured frame. To assure the user that camera access has stopped, we should also set video.srcObject = null, removing the stream from our visible video element.
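Pulling that together, a small sketch of a fully-off handler might stop every track and then clear the element (stopButton and stream are the names from the earlier examples):

stopButton.addEventListener('click', () => {
  // Stop every track on the stream, then clear the element so the last frame disappears
  stream.getTracks().forEach(track => track.stop());
  document.querySelector('video').srcObject = null;
});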
Demo
To see the constraints (as we currently requested them) on a given track, we can call track.getConstraints() and we will see an object that matches the one we passed in to getUserMedia.
To see what Settings actually were applied (that is, which of our preferences became real), we can check track.getSettings().
If we want better information about what other Capabilities are available to our active stream, we can see the ranges and possible values via track.getCapabilities(). However, this is one of the few pieces that is not in every major browser yet, as it is not in Firefox as of this writing.
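As a sketch, we can look at all three side by side for the active video track, feature-checking getCapabilities() since it is not available everywhere; the example values in the comments are only illustrative.

const track = stream.getVideoTracks()[0];

console.log(track.getConstraints()); // e.g. { facingMode: "user" }
console.log(track.getSettings());    // e.g. { facingMode: "user", width: 1280, frameRate: 30, ... }

if (track.getCapabilities) {
  console.log(track.getCapabilities()); // e.g. { width: { min: 1, max: 1920 }, ... }
}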
Demo
Finally, to apply new constraints we can pass a new constraints object to track.applyConstraints(). This is a full overwrite, so if we previously specified an aspectRatio and on this new update we only set a frameRate, the old value for the aspectRatio will be forgotten and its default value will be in effect.
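A small sketch of that overwrite behavior, assuming the active track from above:

// First update: ask for a square-ish picture
await track.applyConstraints({ aspectRatio: { ideal: 1 } });

// A later update only mentions frameRate, so the earlier aspectRatio preference is dropped
await track.applyConstraints({ frameRate: { ideal: 24 } });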
Our constraints object can even pass in a specific deviceId if we know how the system refers to a specific camera. Realistically, we will not be able to know that ahead of time in most cases, but there is another Media Devices API method we can use called enumerateDevices. Calling it gives us a list of all available devices, with the default ones listed first. This method does not require user permission first; however, if you call it before allowing access via getUserMedia, you will get only a subset of the available information.
[{
  "deviceId": "abc",
  "kind": "videoinput",
  "label": "", // This will be blank until user has granted camera permission
  "groupId": "xyz"
}]
If called after camera access is allowed, we get a slightly different result where the label is filled in.
[{
  "deviceId": "abc",
  "kind": "videoinput",
  "label": "Front Camera",
  "groupId": "xyz"
}]
With this information, it’s possible to have a button or dropdown that allows you to switch between cameras and change video streams by passing the appropriate deviceId in as a constraint.
stream = await navigator.mediaDevices.getUserMedia({
  video: { deviceId: "abc" },
  audio: false
});
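One possible sketch: fill a select element (a hypothetical cameraSelect here) from enumerateDevices() and request whichever camera the user picks.

const devices = await navigator.mediaDevices.enumerateDevices();
const cameras = devices.filter(device => device.kind === 'videoinput');

// cameraSelect is a hypothetical <select> element on the page
cameras.forEach(camera => {
  const option = document.createElement('option');
  option.value = camera.deviceId;
  option.textContent = camera.label || 'Camera';
  cameraSelect.appendChild(option);
});

cameraSelect.addEventListener('change', async () => {
  // Stop the current stream before asking for the newly chosen camera
  stream.getTracks().forEach(track => track.stop());
  stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: cameraSelect.value } },
    audio: false
  });
  document.querySelector('video').srcObject = stream;
});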
Demo
The browser still provides control
Now that we have been responsible and provided many options for our users to opt out or limit their camera usage, we must acknowledge that browsers also provide a lot of options to the user. Just as browsers have introduced ways to mute tabs playing audio in the background, they also (to varying degrees) give users control over their camera and microphone usage. Ways to revoke access for all permission-based features (such as cameras and location services) have become prominent. So we always need to account for the fact that a user can remove access at any point.
Some browsers let you pause and play an active stream’s track without revoking access, and some even allow you to change the active camera.
Get creative
Have fun, be creative, and empower users to be creative and in control.