Last week I posted the first article in my series on writing your first WebRTC application. In it, I explained the high-level aspects of a WebRTC signaling server along with the client-side components. At the risk of repeating myself, here are the most important concepts.
- A WebRTC solution consists of two parts – the code that runs in the web browser and the signaling server.
- The web browser application will consist of HTML and JavaScript.
- HTML will be used for user input and page display.
- JavaScript will be used for communication to the signaling server and WebRTC function calls.
- WebRTC requires a signaling server, but gives you a lot of leeway as to what it is.
Today, I want to spend a little more time on the client-side code. Specifically, I want to write about how different web browsers have chosen to implement WebRTC.
Because programming is half science and half art, there are a multitude of ways to solve the same problem. I won’t claim that my way is the best way, but there is a rhyme to my reason and my approach gets the job done.
Companion articles that you may find useful:
Writing Your First WebRTC Application: Part One
The Unfortunate State of WebRTC
There are countless articles on the web about how WebRTC allows you to create and manage multimedia calls to and from web browsers. While that is certainly true, you might not have read that the WebRTC specification is still very much a work in progress and different web browsers put their own twists on it. What might work in one web browser won’t necessarily work in another.
My experience has been limited to Chrome and Firefox, and the two have a number of differences that need to be accounted for. WebRTC also runs in Opera, but I have yet to tackle that browser.
Chrome Browser Settings
It is important to make sure that WebRTC is enabled within Chrome. You do this by opening Chrome and navigating to:
chrome://flags
Search for the string “WebRTC” and ensure “WebRTC device enumeration” is enabled.
There are a number of additional WebRTC settings that apply only to the Android operating system. If you use Android for your application, ensure that those settings have been enabled, too. So far, I have done all my work on a Windows 8 PC.
Note: as of the writing of this article, the current version of Chrome is 36.0.1985.143 m. Google may choose to change any of what I am describing here in the future. Pay attention, as this is a very fluid subject.
Firefox Browser Settings
Unlike Chrome, there are no settings to enable or disable WebRTC functions for Firefox. Of course, this may change over time. As I explained with Chrome, you need to pay attention.
API Differences
For reasons unclear to me, Chrome and Firefox have chosen to use different names for the same WebRTC APIs and objects. This is especially true of Firefox, which appears to have renamed most of them. You aren’t off the hook with Chrome, though; there are simply fewer changes to make.
I would highly recommend that you create a wrapper that checks which browser your application is running in and overrides the WebRTC function names as necessary.
You can check to see which browser you are on and set the function names with the following JavaScript code.
if (navigator.mozGetUserMedia) {
    // Firefox specific code -- Firefox prefixes these objects with "moz"
    RTCPeerConnection = mozRTCPeerConnection;
    RTCSessionDescription = mozRTCSessionDescription;
    RTCIceCandidate = mozRTCIceCandidate;
    getUserMedia = navigator.mozGetUserMedia.bind(navigator);
} else if (navigator.webkitGetUserMedia) {
    // Chrome specific code -- only some names carry the "webkit" prefix;
    // the rest already use the standard names
    RTCPeerConnection = webkitRTCPeerConnection;
    getUserMedia = navigator.webkitGetUserMedia.bind(navigator);
} else {
    // Neither prefixed API was found; this browser likely does not support WebRTC
    console.log("This browser does not appear to support WebRTC");
}
Now, whenever you call one of these overridden WebRTC functions, it won’t matter which browser you are on.
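For example, once the wrapper has run, the same getUserMedia call works unchanged in Chrome and Firefox. Here is a minimal sketch; the constraints and callback bodies are purely illustrative:

// Sketch only: request both audio and video through the unified getUserMedia
getUserMedia(
    { audio: true, video: true },
    function(stream) {
        // success callback: "stream" is the local MediaStream
        console.log("Received local media stream");
    },
    function(error) {
        // error callback: for example, the user denied access
        console.log("getUserMedia error: ", error);
    }
);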
I highly recommend that you invoke this wrapper code immediately after your HTML code lays out the page. For example, let’s name the wrapper function initAdapter() and call it from the function onPageLoad().
<body onLoad="onPageLoad();">

function onPageLoad() {
    ...
    initAdapter();
}
Stream Management
In addition to using different function names, Chrome and Firefox differ in how they handle media streams. To deal with that, define generically named helper functions whose implementations are specific to each browser.
In a future article, I will put these functions to use. For now, though, I will simply define them.
For Firefox:
attachMediaStream = function(element, stream) {
    // Firefox attaches the stream directly to the media element
    element.mozSrcObject = stream;
    element.play();
};

reattachMediaStream = function(to, from) {
    to.mozSrcObject = from.mozSrcObject;
    to.play();
};

// If the track accessors are missing, stub them out to return empty arrays
if (!MediaStream.prototype.getVideoTracks) {
    MediaStream.prototype.getVideoTracks = function() {
        return [];
    };
}

if (!MediaStream.prototype.getAudioTracks) {
    MediaStream.prototype.getAudioTracks = function() {
        return [];
    };
}
For Chrome:
attachMediaStream = function(element, stream) {
    // Chrome attaches the stream through a blob URL assigned to the element's src
    element.src = webkitURL.createObjectURL(stream);
};

reattachMediaStream = function(to, from) {
    to.src = from.src;
};

// If the track accessor methods are missing, fall back to the
// videoTracks/audioTracks properties
if (!webkitMediaStream.prototype.getVideoTracks) {
    webkitMediaStream.prototype.getVideoTracks = function() {
        return this.videoTracks;
    };
    webkitMediaStream.prototype.getAudioTracks = function() {
        return this.audioTracks;
    };
}

// If the stream accessor methods are missing, fall back to the
// localStreams/remoteStreams properties
if (!webkitRTCPeerConnection.prototype.getLocalStreams) {
    webkitRTCPeerConnection.prototype.getLocalStreams = function() {
        return this.localStreams;
    };
    webkitRTCPeerConnection.prototype.getRemoteStreams = function() {
        return this.remoteStreams;
    };
}
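To give you a feel for how these helpers will eventually be used, here is a quick sketch; localVideo and localStream are just placeholder names for a video element and a stream obtained from getUserMedia. Attaching a stream becomes the same call in either browser:

// Sketch only: "localVideo" is an assumed <video> element and
// "localStream" an assumed MediaStream returned by getUserMedia
var localVideo = document.getElementById("localVideo");
attachMediaStream(localVideo, localStream);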
Let’s Call it a Day
I was told that you should always leave ‘em wanting more, so I am going to stop here. In summary:
- You need to ensure that WebRTC has been enabled for the web browser.
- Firefox and Chrome use different names for the same WebRTC functions. Create a wrapper to override the browser-specific names.
In my next installment, I will start to use the function calls to create the WebRTC application. I hope you come back for more because this is where it really starts to get fun.