There is something you need to know about me. When I get excited about something, I tend to get really excited. Case in point is WebRTC. I’ve seen a lot of amazing ideas, inventions, and products in the field of communications, but the notion of turning a web browser into a full-featured multimedia communications device is truly revolutionary.
So, with this, my fourth blog about WebRTC in a relatively short period of time, I hope to shed a little more light on this thing that has truly captured my attention.
Let’s get started.
In case you didn’t catch this from my previous blog articles, WebRTC does not define a signaling protocol. This is not because the developers were lazy or simply forgot that you need to signal another device before you can send media to it. Signaling was left out of WebRTC for some very good reasons:
- Different applications may require/prefer different protocols. The WebRTC working group did not want to lock things down to a protocol that might turn out to be inadequate for some of its uses.
- WebRTC runs in a web browser, and building signaling into it would require web pages to be stateful. That becomes problematic because signaling state would be lost every time a page reloaded.
Since signaling is required for call setup, WebRTC solutions must include a signaling server of some type. Again, WebRTC itself doesn’t care how that server implements signaling, but it must exist somewhere in the network.
However, it isn’t exactly a signaling free-for-all. WebRTC requires that the endpoints exchange Session Description Protocol (SDP), so whatever signaling you choose has to carry it. That’s right, the same protocol that SIP uses to describe its media connections.
Don’t think that this means that WebRTC servers automatically use SIP for signaling. SDP is a protocol unto itself and can be used by any signaling protocol to define, advertise, and in some regards, negotiate multimedia capabilities between peers.
At the same time, don’t think that WebRTC can’t use SIP for signaling. All the client cares about is that it can send SDP to something and that something signals the far-end. The client doesn’t give a hoot about what happens in the middle as long as the far-end client receives SDP and sends SDP back.
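To make that concrete, here is a small Python sketch that pulls the media type, port, and advertised codecs out of an SDP blob. The SDP text is a simplified, hypothetical audio offer I wrote for illustration, not output captured from a real browser, and the parser only handles the handful of lines it needs.

```python
# A simplified, hypothetical SDP offer advertising a single Opus audio stream.
SDP_OFFER = """v=0
o=andrew 2890844526 2890844526 IN IP4 203.0.113.1
s=-
c=IN IP4 203.0.113.1
t=0 0
m=audio 49170 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
"""

def parse_media(sdp):
    """Return a list of media sections: type, port, and advertised codec names."""
    sections = []
    for line in sdp.splitlines():
        if line.startswith("m="):
            # m=<media> <port> <proto> <payload types...>
            media_type, port, _proto, *_fmts = line[2:].split()
            sections.append({"type": media_type, "port": int(port), "codecs": []})
        elif line.startswith("a=rtpmap:") and sections:
            # a=rtpmap:<payload> <codec>/<clock rate>[/<channels>]
            codec = line.split(None, 1)[1].split("/")[0]
            sections[-1]["codecs"].append(codec)
    return sections
```

Running `parse_media(SDP_OFFER)` yields one audio section on port 49170 advertising Opus — exactly the kind of capability information the two peers trade during signaling.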
If you have any questions about what SDP is and how it works its magic, please see my article Understanding Session Description Protocol (SDP).
Step by Step
- Andrew creates an offer that contains his local SDP.
- Andrew sets that offer as the local description on something known as an RTCPeerConnection object.
- Andrew sends his offer to the signaling server, typically over WebSocket. WebSocket is a protocol that provides a full-duplex communications channel over a single network connection, which makes it a popular way to move signaling messages between a web browser and a server — although, like signaling itself, WebRTC does not mandate it.
- Linda receives Andrew’s offer using WebSocket.
- Linda creates an answer containing her local SDP.
- Linda sets Andrew’s offer as the remote description and her own answer as the local description on her RTCPeerConnection object.
- Linda returns her answer to the signaling server using WebSocket.
- Andrew receives Linda’s answer using WebSocket.
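The steps above can be sketched as a toy in-memory relay. The class and method names here (`SignalingServer`, `register`, `send`) are mine, the SDP strings are placeholders, and a real server would forward these messages over WebSocket rather than Python calls — the point is simply that the server shuttles opaque SDP between the two peers.

```python
class SignalingServer:
    """Toy in-memory signaling relay: delivers messages between named peers."""

    def __init__(self):
        self.inboxes = {}  # peer name -> list of waiting messages

    def register(self, name):
        self.inboxes[name] = []

    def send(self, to, message):
        # The server never interprets the SDP; it just delivers it.
        self.inboxes[to].append(message)

server = SignalingServer()
server.register("andrew")
server.register("linda")

# Steps 1-3: Andrew creates an offer with his local SDP and sends it via the server.
server.send("linda", {"type": "offer", "sdp": "(Andrew's local SDP)"})

# Step 4: Linda receives Andrew's offer.
offer = server.inboxes["linda"].pop(0)

# Steps 5-7: Linda creates an answer with her local SDP and returns it the same way.
server.send("andrew", {"type": "answer", "sdp": "(Linda's local SDP)"})

# Step 8: Andrew receives Linda's answer.
answer = server.inboxes["andrew"].pop(0)
```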
I am saving explanations of ICE, STUN, and TURN for a future article, but with this simple flow, the users have shared their SDP and understand who is capable of what.
Keeping it Simple
This should be enough for now to convey the following points:
- WebRTC does not define signaling.
- WebRTC uses SDP to define the media characteristics of a call.
- A signaling server sits between the two clients.
- Clients typically use WebSocket to communicate with the signaling server, and the server uses it to talk back.
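One last practical note: since WebRTC doesn’t dictate the wire format either, many applications simply wrap the SDP in a small JSON envelope before pushing it through the WebSocket. The field names below (`type`, `from`, `sdp`) are just one common convention I’m assuming for illustration, not part of any standard.

```python
import json

def make_signal(msg_type, sdp, sender):
    """Wrap an SDP blob in a JSON envelope for transport over a WebSocket."""
    return json.dumps({"type": msg_type, "from": sender, "sdp": sdp})

raw = make_signal("offer", "v=0 (Andrew's local SDP)", "andrew")
msg = json.loads(raw)  # what the signaling server (or the far end) would decode
```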
Good enough? Great! Stay tuned for future articles on my latest passion. I still have a lot more to say on this exciting subject.