Chrome in 2015 supported the arrows marked in black and orange, which gave some options but were very limited:
- getUserMedia(): allows capturing live video/audio but provides no access to the data, its timing information or its metadata; it needs a sink (<video>/<audio>), which essentially provides no access to any of that information either.
- Rendering through a <video> and then a <canvas> to download the pixels via its 2D/3D (WebGL) context incurs tremendous time and CPU penalties due to the round trip(s) to the GPU.
- WebAudio: tremendously flexible, but it can't record and is only good for, well, audio.
- <input capture>, which is not in the diagram, is widely implemented on mobile devices but just dumps you into the system capture/image picker.
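To make the <canvas> penalty concrete, here is a sketch of that pre-2015 route to the pixels: getUserMedia() into a <video>, each frame drawn onto a <canvas>, then read back with getImageData(). The browser part is guarded so the snippet stays self-contained; bytesPerFrame is a hypothetical helper added here just to make the data volume explicit.

```javascript
// Raw RGBA size of one frame read back from the canvas.
function bytesPerFrame(width, height) {
  return width * height * 4; // 4 bytes per pixel: R, G, B, A
}

// Browser-only part (guarded so the sketch also loads outside a page):
if (typeof document !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
    const video = document.createElement('video');
    video.srcObject = stream;
    video.play();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    const grab = () => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);
      // This readback forces a GPU-to-CPU round trip on every frame;
      // at 1280x720 that is bytesPerFrame(1280, 720) = ~3.7 MB per frame.
      const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
      requestAnimationFrame(grab);
    };
    video.onloadedmetadata = grab;
  });
}
```

At 30 fps and 720p that is over 100 MB/s pulled off the GPU just to look at the pixels, which is why this path was never a serious capture mechanism.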
So I started working on the red, green and blue arrows. The biggest win is MediaRecorder (red arrow) because, simply put, it turns online live content into offline content. This is huge: it makes use cases such as gaming (think Twitch, YouTube Live), cam apps (Snapchat, Instagram), screencasts, broadcasting etc. feasible on the Web. (I say feasible because without this API it might be possible, but frankly it would be slow as hell and unviable for anything like having users beyond your dormmates.) Developers, who know a lot about their businesses, requested this feature en masse: the original bug tracker entry was the most starred feature ever in Chromium, yet it had been dangling with no one really doing anything about it. Why? Well, mostly because it's fecking hard to write code for real-time media encoding, and it's hard to get specs right; it takes a thick skin to make any progress in either of those realms, let alone both. Also, org-wise this was internally ascribed to the Cr-WebRTC realm, for the simple reason that the spec stems from the W3C WebRTC Working Group, but there it was down-prioritized in favour of the black and orange arrows.
MediaRecorder work started in the second half of 2015 and shipped in M47; it has remained active because the hardware encoders also needed to be carefully wired in (when they crash they can take down the whole browser, which is a no-no). But wait! Wouldn't it be cool to be able to record videos as they play back, or my latest cool WebGL game? Yeah, that's why firstname.lastname@example.org and I landed stream capture from <canvas/video/audio> elements in Chrome, which allows you to plug those elements into a MediaRecorder. Boom! Building blocks that come together!
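Those building blocks snap together in a few lines. Here is a sketch of recording a <canvas> via captureStream() into a MediaRecorder; the codec list and the timings are assumptions for illustration, and pickMimeType is a hypothetical helper (in a page you would feed it MediaRecorder.isTypeSupported).

```javascript
// Pick the first container/codec the browser supports. The predicate is
// injected so this bit is testable outside a browser.
function pickMimeType(candidates, isSupported) {
  return candidates.find((type) => isSupported(type)) || '';
}

// Browser-only part, guarded so the sketch stays self-contained:
if (typeof document !== 'undefined' && typeof MediaRecorder !== 'undefined') {
  const canvas = document.querySelector('canvas'); // e.g. your WebGL game
  const stream = canvas.captureStream(30); // capture at 30 fps
  const mimeType = pickMimeType(
    ['video/webm;codecs=vp9', 'video/webm;codecs=vp8', 'video/webm'],
    (t) => MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, { mimeType });
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // Online live content is now an offline asset: download it, upload it…
    const blob = new Blob(chunks, { type: mimeType });
  };
  recorder.start(1000); // emit a chunk every second
  setTimeout(() => recorder.stop(), 5000); // record five seconds
}
```

The same recorder works unchanged on a getUserMedia() stream or on a stream captured from a <video>/<audio> element, which is exactly the composability the arrows describe.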
The next missing item was how to take pictures at full resolution and manipulate the photo-specific settings (think zoom, or flash), which, surprise surprise, were not surfaced to the Web at all before. This is the W3C Image Capture spec; it has been implemented and shipped as an experiment in M56-57-58, so we're collecting data to ship it fully.
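In outline the API looks like this: wrap a camera track in an ImageCapture object, read the track's capabilities, apply a photo setting, and take a full-resolution shot. clampZoom is a hypothetical helper added here so part of the logic runs outside a browser; the zoom value of 2.0 is just an example.

```javascript
// Keep a requested zoom inside the device's advertised range.
function clampZoom(requested, { min, max }) {
  return Math.min(max, Math.max(min, requested));
}

// Browser-only part, guarded so the sketch stays self-contained:
if (typeof navigator !== 'undefined' && typeof ImageCapture !== 'undefined') {
  navigator.mediaDevices.getUserMedia({ video: true }).then(async (stream) => {
    const [track] = stream.getVideoTracks();
    const caps = track.getCapabilities();
    if (caps.zoom) {
      // Photo-ish settings such as zoom are applied as track constraints.
      await track.applyConstraints({
        advanced: [{ zoom: clampZoom(2.0, caps.zoom) }],
      });
    }
    // takePhoto() returns a Blob at the full sensor resolution, not the
    // (smaller) resolution of the live preview stream.
    const photoBlob = await new ImageCapture(track).takePhoto();
  });
}
```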
Yeah, I did do interesting stuff before joining G (I guess previous employers wouldn't be thrilled if I were to give too many details, so I'll keep it fuzzy). My previous employer was Alcatel-Lucent Bell Labs in Antwerpen, Belgium, the same place where (mostly) the same guys developed and shipped a bazillion DSL boxes giving service to a ton of homes all around the world. In particular I worked on the DSL aggregator IP (the whole thing is marketed as ISAM), which is the secret sauce where all the DSL or PON customer lines get bundled into a massive chunk of uplinks of ~500 Gbps (usually split into a number of 10/40 or even 100 Gb Ethernet-over-fiber links). ALu is a great place to do network equipment and, well, not too good a place for anything else.
Also in Belgium (with trips to Noordwijk in the Netherlands) I worked for a contractor of the European Space Agency, which got me dealing with two systems. One was a VR setup for astronaut experiments measuring the response time to different audiovisual stimuli, where I did the real-time parts (basically a pacemaker managing FIFOs for sensor inputs and actuator outputs, all using RTX for Windows - sigh). The other project was an attitude-and-heading tracking system to support astronauts onboard the ISS (this one eventually made it up there!); here I did the Kalman filter formulation for the sensor integration (in those days accelerometers and gyros were not integrated, and it was quite a challenge to solder them, let alone model them).
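The flavour of that sensor-fusion work can be sketched with a textbook 1-D Kalman filter: the gyro rate drives the prediction, and an accelerometer-derived angle provides the correction. This is a generic illustration, not the ESA formulation; the noise parameters and the drift numbers are all made up.

```javascript
// One predict/update cycle of a 1-D Kalman filter for an angle estimate.
// state = { x: angle estimate, p: its variance }.
function kalmanStep(state, gyroRate, accelAngle, dt) {
  const q = 0.01; // process noise (gyro drift) - assumed value
  const r = 0.1;  // measurement noise (accelerometer) - assumed value

  // Predict: integrate the gyro rate; uncertainty grows.
  const xPred = state.x + gyroRate * dt;
  const pPred = state.p + q;

  // Update: blend in the accelerometer angle via the Kalman gain.
  const k = pPred / (pPred + r);
  return { x: xPred + k * (accelAngle - xPred), p: (1 - k) * pPred };
}

// Example: a stationary sensor (true angle 0) with a biased gyro.
let state = { x: 0, p: 1 };
for (let i = 0; i < 100; i++) {
  state = kalmanStep(state, 0.05 /* gyro bias */, 0 /* accel angle */, 0.01);
}
// The accelerometer correction keeps the estimate near 0 despite the
// gyro bias, which on its own would have drifted by 0.05 rad/s.
```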
Immediately before that I worked in Spain for a few years on real-time Linux systems for trains, more concretely on train-to-ground communication and a bit on train controllers. That was a ton of fun, dealing with those tiny RTAI (later Xenomai) boxes bridging GPRS and GSM-R to the MVB (a real-time serial bus on the train from which all systems hang, including engines and brakes!). Moreover, it's heart-warming to see those systems in operation all around the world, carrying tiny pieces of me :)
Even before that I finished an M.Sc. in CompSci at UCD (Ireland) on power-consumption estimation for software running on embedded devices, quite interesting research that led to a few publications here and there.