Live 360°

An interview with David Robinson, Executive Producer, Total Media VR (TMVR)


1. What kind of solutions do you offer?

Our business model enables us to react to specific needs and create custom solutions for different engagements. As engineers, we favour finding the best path for our clients – whether that means using available resources or developing new technology whilst integrating with our existing production footprint. We’ve also focused on creating one umbrella product platform that we can enhance, market and reliably deliver, time and time again.

We swap in and out a selection of current market camera technology, favouring systems that complement one another. This allows us to create multiple camera solutions that can be connected and controlled in a traditional broadcast environment. We need to guarantee quality and reliability, so these systems must integrate with our existing hardware and utilize our established architecture and infrastructure. From there, we can offer a product that is familiar to any client’s workflow.

 

David Robinson, Total Media VR

 


2. Who is most interested in 360° / virtual reality (VR) and Live 360°?

Our clients come from music, entertainment, sports, corporate and advertising agencies. They have all shown growing interest in 360° live video, VR and augmented reality (AR). Our aim has been to fuel their imagination and provide a broad array of new solutions from our partner group to offer to their clients.

We’re also expanding into new areas through a strong relationship in the pharmaceutical sector. We’re working on developing hardware for a major brand in 2018 to focus on expanding the traditional content creation landscape in their space.


 


3. Is 2D obsolete if you capture in 3D? Is there a way to integrate 2D assets into 360° / VR?

No, far from it – 2D is not obsolete. In 2017, Grammy Award winner Christian McBride invited us to shoot the last night of his concert series in New York at the famous Village Vanguard. Working with GraysonX and HEAR360, we deployed both 2D and 360° cameras in the space, combined with HEAR360’s 8ball microphone. The process brought together a 360° background embedded with close-up angles from our 2D cameras as picture-in-picture assets. With the spatial audio from HEAR360 underpinning the visual immersion, we were able to deliver a rapid, real-time deployment of an augmented experience. Joining the two mediums in a collaborative deliverable added a completely new production perspective to the value of the piece.


4. Can you get the same picture quality in 360° / VR as you do with a traditional broadcast?

It depends on the cameras, location and all the factors that affect capture. The quality of the camera technology has a bearing, together with available light and position. If you’re shooting a TV show, you’re able to paint and shade cameras to match – 360° cameras are no different. In post-production, there are tools that allow you to manage colour correction and enhance quality.

When you’re working in live production, you’re trying to achieve something that is instantly enjoyable and consistent, without disturbing the visual experience. That’s something we are very familiar with. We have explored ways to manage consistency reliably and work with camera technology that complements the workflow.

 



5. What are the challenges and / or limitations of working in 360°?

The challenges reflect the infancy of the market: ever-changing hardware, technology and appetite. It’s more about whether you can work with 360° capture reliably and consistently. We are there to ensure everything goes seamlessly and smoothly. The general interest and perception of 360° is all about the head-mounted view. This becomes a limitation and in some cases a detractor. It’s forcing companies like ours to push to change perception and move users to engage with content differently. Instead of locking them into their own world, we take the experience to giant interactive screens, dome projections, on to mobile devices inside a social media stream or over-captured into a traditional 16×9 frame.


6. Can you do 360° / VR broadcasts or Live 360° anywhere, or do you need a special broadcast facility?

Yes, we’ve produced broadcasts from very obscure locations. The general rule of thumb is to retain the quality expected of the production and the broadcast. When you start shooting in 360°, you’re increasing the number of cameras, and therefore the frame size and resolution to broadcast. As a result, you’re looking to stream more data than you were previously. In the end, a lot of the simpler solutions start to fall away. What we’ve done is to start looking at ways we can push higher resolution via differing mechanisms and processes. To meet these demands, we’ve built dedicated fly-packs and portable, more manageable form factors – anything from a full-size outside broadcast (OB) truck down to a 5K multi-capture system that can travel in the overhead compartment of a domestic or international flight. We can broadcast VR / 360° from anywhere, as long as we can source enough bandwidth to meet the quality expectations of the live production.


 


7. Brands want to share content via social media channels, because that’s where they frequently engage with their audience. How can you plug in and stream to popular social networks like Facebook and YouTube?

Our team members have had a long relationship with Facebook through their client, Telescope. As an engineering and production partner, Total Media VR (TMVR) supports a large number of their live streaming needs. Amongst its many firsts and award-winning engagements, Telescope invented the “Donate Button” for Facebook, coding the API that Facebook uses across all of its charities and philanthropic ventures streamed online. Recently, we’ve been testing how to take a 360° live experience in 4K and stream it directly to Facebook. Working closely with the Telescope team, we successfully embedded dynamic audience interactions, moderated chat, polling and event statistics direct from the Telescope environment into the 360° world of a live stream, in real-time. This example demonstrates how social media plays an important role in bringing familiar engagements into the user’s stream and utilising the 360° environment to create a more compelling call to action with additional augmented content.


8. Can you stream or broadcast to all the social networks at once?

We can use a number of tools at our disposal to stream simultaneously to supporting social media networks. Our encoding platforms are built to support redundant streams to single accounts or multiple streams to a number of different accounts across familiar networks such as YouTube, Periscope and Facebook.


 


9. What’s the most unique thing about your offering? Any world firsts?

The most unique thing about TMVR is the way we innovate. It’s all about our eagerness to engineer and our ability to ideate. We thrive on being challenged by our clients, from the most basic request, such as hosting a small production with one or two cameras, to creating our own capture systems or customizing to meet the requirements of the creative. For example, we built a rotating frame for the director of GraysonX’s Alice Phoebe Lou project. The frame could wind up an orbiting camera mount so that, as the structure moved, it would set another component in motion to create a constantly spinning point of view (POV) around the central axis of the subject. In this case, the subject was the talent: as she performed, the camera turned around her, creating a unique 360° experience.

Working alongside GraysonX on SESQUI’s ‘HORIZON’ film, we were the first to create a wireless, real-time, on-set 360° confidence-monitoring solution. TMVR built a wireless product that enabled the primary 360° camera array to be viewed as a stitched view, from which the director could review and direct his shots remotely in real-time.

Partnering with Telescope, we were also the first to embed dynamic moderated chat and polling data into a 360° Facebook stream. We’re also currently working on a long-range wireless 360° POV camera for head-mounted 360° products in sports, entertainment and industrial applications.


10. Where do you see the 360° / VR / AR going in the future? Will everything be live?

I think audiences are becoming more familiar with 360° content. Using destinations such as Facebook in a more traditional setting, such as a phone or a tablet rather than a head-mounted display (HMD), increases the engagement and awareness. It also broadens the creative opportunity to include augmented development.

For now, we are focusing our attention on live streaming and broadcast. There are opportunities – particularly in social media – where live interaction goes further than just watching. Actual engagement in real-time delivers a far more compelling user environment. For instance, simple movement in the space to engage with different aspects of a news feed creates a new opportunity to ideate. Personally, I think combining AR and 360° together, and delivering it live from a social platform with audience engagement, is the most compelling direction for the medium. TMVR is aiming to embolden this step and push to open the user experience to engage a broader interest and a more expansive call to action.


 

The Importance of Spatial Audio

An interview with Hear360’s Founders: Matt Marrin and Greg Morgenstein


1. How did you go from being engineers and producers to launching HEAR360 and creating audio technology?

Matt: Greg and I have been working in the music business for over 20 years. I started working in recording studios in Los Angeles and moved up through a traditional system. I started as a runner, taking food and drink orders, and then moved up the ranks as an assistant to the engineers in recording sessions. If you’re fortunate enough, you start working with those engineers and start producing and engineering records on your own. I met Jimmy Jam and Terry Lewis and started working with them exclusively, recording a lot of their projects – Janet Jackson among many others.

After you are in this world for a long time, people start to recognize that you have a lot of experience with sound and audio and a number of opportunities come up. That’s what happened for Greg as well. New start-ups and companies in the audio technology world are looking for experienced audio professionals that can evaluate, critique and help develop their technology.

Around 2007, Greg and I started consulting for a spatial audio company that was developing software for headphones and soundbars. That led us to testing and tuning prototypes like sound systems for cars with spatial audio. I consulted for SONOS evaluating the sound of their products when they were in early and late-stage development, and later started consulting for DTS. They were looking to expand their spatial headphone system and needed people with mixing experience to put together demos, and to evaluate and make observations on improving those systems.

Through the path of consulting and talking to companies about how to improve their products, we started coming up with our own product concepts and realized we could do something on our own. We launched HEAR360 and decided to make those concepts a reality.


2. Briefly, why is spatial audio important and how does it work?

Matt: There are different ways to experience immersive content – in a virtual reality (VR) scenario where you’re in a head-mounted display (HMD), watching something on a mobile device in 360°, or watching on your laptop in 360°. When you’re watching an immersive video you can look around, experience different aspects of it and control your experience. If the audio doesn’t have a head-tracking component then you’re not really getting a fully immersive experience – you’re only getting 50% of it. In these scenarios, in order to sell the consumer on the experience, you have to deliver 100%, otherwise they won’t buy it.

People often overlook sound and consider it last. Most people pay attention to the visuals and think it’s the most important part. We think video is only half of the experience. Even though a user may not be able to articulate what immersive sound is or how important it is, they instantly recognize when they hear something that makes them feel like they are really inside an environment.

Greg: We provide those components that complete the whole sensory connection between the visual and auditory. When that is completed the experience becomes believable, real, and engaging.

Spatial audio is achieved by replicating the way our ears hear sound. Imagine someone trying to record a stereoscopic static video. They would set up two camera lenses spaced and positioned like our eyes, so that the lenses see the scene the same way that we do. We take that same approach to recording sound. It’s really important to capture sound with a pair of microphones from a perspective that replicates how our ears hear that sound.

Matt Marrin, Greg Morgenstein and Saul Laufer, Hear360

3. What are the other ways of replicating 3D sound?

Greg: We set out to create something that captures audio with a very natural-sounding result. The outcome is that you feel like you’re really there. Our current approach replicates how human ears work together when capturing a scene. Another way of capturing spatial audio, or creating a spatial audio experience, is to synthesize it. This is where you capture a sound signal and create an experience in post based on that sound field, similar to ambisonic capture or ambisonic deliverables. It’s another method that a lot of companies in this industry have adopted. We’ve created a lot of those tools and built a ton of prototypes of those tools along the way, but we think that our first products should be really natural sounding, so we decided to start by pushing the binaural experience. We believe binaural has a very important place in VR and immersive content.


4. Explain the difference between binaural and ambisonic.

Matt: We can separate spatial audio capture into two main categories: binaural capture and delivery, versus ambisonic capture and delivery. One difference between the two formats is that ambisonic recordings require synthesis to be spatialized – the audio needs to be converted to B-Format and then fed through head-related transfer function (HRTF) filters in order for a listener to hear the spatial component. The problem here is that you can’t control how a platform’s HRTF filters sound – meaning that you lose control of how your recordings sound at the delivery stage. There are other problems with ambisonic recording, including phase and frequency response issues that are inherent in the decode process. The reason why we love our omni-binaural recording solution is that it is not synthesized. All the spatial audio cues are baked into the original recording in high resolution. Coming from a background of over 20 years of recording, mixing, playing, and producing music, as an audio engineer you’re drawn to capturing something, hearing it the way you captured it and understanding how it sounds. We are obsessed with resolution and how we record things. We’re fascinated with our playback monitors, and when we mix we listen in all sorts of different environments including our cars, homes, laptops, etc. We do that for one reason – because we want to make sure our mix sounds great wherever it’s played, and we want to have a firm understanding of that.
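The B-Format synthesis Matt contrasts with omni-binaural recording can be sketched in a few lines. Below is a minimal, illustrative encoder that places a mono source into first-order B-Format using the classic FuMa weighting – a textbook formulation, not any particular platform’s actual encoder, and the function name and angle conventions are assumptions for the example:

```python
import math

def encode_first_order_ambisonics(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order B-Format (FuMa
    convention): W carries the omnidirectional pressure signal,
    while X, Y and Z carry figure-of-eight directional components.
    Decoding to headphones would then apply HRTF filters, the step
    the interview notes is outside the producer's control."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample * (1 / math.sqrt(2))          # omni component, -3 dB
    x = sample * math.cos(az) * math.cos(el)  # front/back
    y = sample * math.sin(az) * math.cos(el)  # left/right
    z = sample * math.sin(el)                 # up/down
    return w, x, y, z
```

A source dead ahead (azimuth 0°, elevation 0°) lands entirely in W and X, which is why rotating the listener’s head only requires remixing these four channels rather than re-recording.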


5. What was the motivation or inspiration for developing a spatial microphone? Did it have to do with creative challenges or developments in 360° or VR?

Greg: We wanted to have an end-to-end solution where we control the quality of it from capture to delivery. We felt like there was a place for an omni-binaural capture because we wanted to have something that sounded very natural and sounded like the space and the performance that was played in the initial environment. We liked that you could capture sound that way, without the need to change it or affect it if you didn’t want to. So we sought to develop a microphone that could accomplish that. From there, we had to create all of the tools to edit the captured content, mix it with non-spatial audio, or things that you wanted to become spatial, and finally deliver it encoded for your preferred content platform.

Matt: Before the 360° video and VR world evolved, another big motivation for us was how cool binaural audio is. We had been working in a space where static binaural was something that had been around for years, but not a lot of people had experienced it. When you hear it, it really changes the way you think about experiencing recorded sound. When you realize you can experience recorded sound in three dimensions, it’s kind of mind-blowing. Now there’s an opportunity where you can interact and move around in a sound environment, making the experience even more believable. When VR and 360° video came into play, immersive audio was a neglected area. At the time, it was more of an idea than anything. For us, it was all about the excitement of delivering something new and sharing that excitement with other people.

Vic Mensa, AT&T and Direct TV

6. Technically, how does the 8ball microphone work when capturing sound? Is it easy to use when it comes to production? Who should buy the 8ball microphone — who is your ideal customer?

Greg: The technical aspect of how it works is that it basically mimics four human heads facing in four opposing directions – front, left, rear and right. Every human head has two ears, each with a pinna, and acoustical shadowing based on the curvature of the head. Those are the basic aspects of a capture. Because your ears are separated, there is a time delay between their captures, and when you introduce a pinna you get filtering, or tone colourization, based on directionality towards a pinna at the front or the rear. When you combine all these things – the curvature of the sphere, acoustic shadowing, time delays and level differences based on the directionality of the oncoming signal, and omni-directional capture – you get a spatial capture. When you do it in every direction, you can head-track that capture. Since it’s based on the human head in a standard binaural capture, you get a very natural sound.
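The time-delay cue Greg describes can be put into numbers. A minimal sketch of the interaural time delay (ITD) using Woodworth’s classic spherical-head approximation – a textbook model, not HEAR360’s actual processing, and the head-radius constant is an assumed average:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average human head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 deg C

def interaural_time_delay(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the interaural
    time delay for a far-field source.
    azimuth_deg: 0 = straight ahead, 90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to the side reaches the far ear roughly
# 0.6-0.7 ms after the near ear; a source dead ahead has no delay.
print(round(interaural_time_delay(90) * 1000, 3), "ms")
```

That sub-millisecond delay, combined with the level and tone differences from head shadowing and the pinna, is what the brain uses to localize the source.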

When it comes to production, we designed 8ball to be point and shoot. There are eight channels that you’re capturing. You’ll need a multi-channel audio recorder – Sound Devices, Zoom or any other hardware device that records eight channels of audio will work – and then you just point and shoot. We’ve designed all of our post tools to make it easy to fix things if you’ve captured something wrong. In that case, you can recalibrate with our calibration tools. If you don’t want to do anything in post, you can just multiplex (MUX) it with video using our encode tool and deliver it to whatever platform you see fit. You can also convert it to an ambisonic deliverable for Facebook and YouTube.

Ideally, the people who purchase 8ball use it for a lot of music-related capture, reporting, documentary, cinematic, or live streaming work because of how natural it sounds. It’s simple to use and streamlined, so it doesn’t require a lot of encoding or manipulation. Another good example is capturing natural background environments such as a park, jungle or street for VR gaming experiences and 360° experiences. It’s eerie how real it sounds and it’s great to put that into a virtual experience.

HEAR360 8ball

7. Do you need special software to decode captured audio from an 8ball microphone? Do your tools integrate with traditional audio post-production tools?

Greg: Yes, it’s all very easy to use. You don’t have to learn anything new to mix and integrate with traditional audio post-production tools. We built our workflow into all the standard digital audio workstation (DAW) systems. The first one we supported was Pro Tools because that’s basically the industry standard for mixing and editing. A lot of people use Reaper because it’s very flexible, so we created virtual studio technology (VST) plugins. These tools can be utilized in other DAW systems as well. We are working on creating some limited tools for video editors like Premiere and Final Cut Pro, where they can take an 8ball capture and put it into these systems, calibrate the capture, edit, encode, and deliver it.


8. What about playback of spatial audio, why not focus on this as well? Wouldn’t the ultimate experience of immersive sound be through a high-end amplifier or expensive speakers? What is the best way to hear spatial audio?

Greg: Most people experience VR content in a head-mounted display (HMD), and every HMD requires a headphone component. So that’s the first thing we wanted to support. When you’re wearing headphones, you’re separating your left ear from your right ear. During spatial audio playback, it’s easier to separate them because you don’t have crosstalk. For playback over speakers, we’ve created tools that include crosstalk cancellation – so when you play information from the left speaker your right ear won’t pick it up, and vice versa. You can steer that information and work with delay to manipulate things in order to mimic a headphone experience and create a compelling spatial audio experience. It’s something that we’ve been working on since we started the company, but we don’t yet see a large use of it in the industry. Most people are wearing headphones, so we are focusing on headphones first. However, we have created tools for beamforming, crosstalk cancellation, and tools to deploy a spatial audio experience over speakers for future products.

The 8ball in action: capturing 360° audio at the Madrid Open

9. Can you live-stream spatial audio?

Matt: Yes, but only if you build very specific tools, which we have. In order to do this, we built our own streaming server to live-stream multi-channel audio. We set it up so that our web player could render live 4K video with head-trackable spatial audio over LTE.

There are a few people who are attempting to live-stream with spatial audio, however, it’s a pretty difficult process to pull off. It’s not as simple as what we’ve created – our solution is more point and shoot. Our solution also gives you the ability to scale up into an advanced live broadcast and the ability to MUX non-spatial content with spatial captures. We’ve created all the tools to allow engineers to do this in real-time, with 4K video and head-tracking over our servers.


10. What creative possibilities do you see in the future of spatial audio?

Matt: A wider range of flexibility will happen over mobile and the delivery format will open up so you can experience VR in your home, or watch it on television more easily with fully interactive sound. Also, live experiences with spatial audio like car racing, horse racing, and other sporting events will become more readily available and expected by consumers who will demand increasing levels of interactivity with the content they consume.

Spatial audio is very important for live events as it completes the transformation of the user feeling like they are truly inside the experience. The difference between headphones and speakers will start to blur and you’ll be able to experience spatial qualities on any playback system. Augmented Reality (AR), increased computing power, personalized HRTFs, and object-based spatial audio solutions will allow us to be truly interactive with sound through normal everyday activities.


 

Measuring Immersive Engagement

An interview with Benjamin Durham, Thrillbox


1. Why is it important to measure consumption in VR/AR?

Currently, a major concern when producing immersive content is that you can’t get a return on the investment. We think it’s important to give virtual reality (VR) and augmented reality (AR) a measure of consumer engagement so we can create a sustainable opportunity for immersive media, and it can be another tool in the marketer’s tool belt.

While Thrillbox technology is being developed for multiple enterprise verticals, our primary reason for focusing on consumer engagement is that we believe that the future of how we value things is predicated on an economy of attention. Attention is a currency in and of itself. It’s based upon the time that a person decides to spend and where they decide to spend it. What we learn from that time spent can improve a person’s experiences on a day-to-day basis.

Ross Gillard (GraysonX) and Benjamin Durham (Thrillbox)

2. Can’t you just apply established practices of consumption analytics to new immersive mediums like VR and AR?

Currently, people are doing that and it’s part of the reason why we’re not able to create sustainable business opportunities from immersive media creative. There’s a disconnect between the way that we measure and what the value of the content can be.

The inherent value of immersive content is that, for the first time, the technology has enabled us to connect with a user’s ability to look at, or look away from something and be aware of it.

When we look at the history of ad impressions, brands conducted focus groups to measure the audience. Brands couldn’t guarantee whether those groups were actually looking at the content. However, they could guarantee that it was being presented to them. Now there’s a different value, and it’s in knowing what consumers are actually looking at.


3. What are heat-maps and how is Thrillbox innovating beyond these?

Heat-maps are a method of visualizing a user’s engagement – where a user is looking inside an immersive experience. Heat-maps aren’t as accurate as they could be. We describe heat-maps as heuristic – they require subjective human interpretation, which isn’t quantifiable.

What we’ve discovered is that brands and clients don’t want to just know where people are looking, they want to know what they are looking at.

The objective of this next wave of the internet is personalization. In order for internet services to make decisions quickly, technology has to be put in place that enables automation. Our hotspot tools allow you to select objects that are within a video, and then quantify the potential of how long they could have been seen, how long they were in your field of view, and how long they were in the center of your attention. Having this information in a quantifiable format enables existing digital advertising technologies to take advantage of their existing platform’s software to target and personalize the delivery of content.


4. What are the nuances of 360° viewing patterns? How do they differ from traditional media?

One important thing we’ve learned is that, currently, people don’t stay in 360° videos for very long. In the interim, you need to ensure that pieces of 360° content are three minutes or less, because everyone is competing for attention. We believe this to be the case for two reasons. First, is the content compelling? Was its production profitable? Second is consumer awareness. Most consumers’ exposure to the medium has been limited to platforms like Facebook and YouTube, which have user interfaces (UI) designed to optimize the delivery of 2D videos. For example, it is very difficult to distinguish a 360° video from a traditional video in Facebook’s feed. Scrolling through the feed, which consumers have become accustomed to, does not give you the ability to discover 360° videos because they are presented the same way as traditional video. I am confident this will change over time, as more consumers are exposed to 360° video’s capabilities more organically, and as more wireless headsets enter the consumer market.

It’s difficult for people to deviate from traditional media, so what we’ve developed is a way for other mobile applications to integrate 360° video technology into their application. I would also suggest that, when developing the user experience, you try to take into account that the consumer lacks the same awareness as the team developing the product. So try your best to create a UI that naturally distinguishes 360° video from traditional video, even if your platform is responsible for distributing both.


5. Explain how you actually track what a person is doing in VR or AR? Is it based on head-tracking?

Yes, for 360° video, it is based on head-tracking. We collect information from the movement of the display device. For example, if you have a smartphone inside a Google Cardboard, we collect information on your head movements. It is a head-tracking based system that measures potential impressions (item in frame), exposure (time in frame), and focus (time centered). We’re collecting the orientation data for every frame of the video.
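The three measures above – impression, exposure and focus – can be sketched from raw per-frame yaw samples. A minimal, hypothetical illustration, not Thrillbox’s actual model: the field-of-view and focus thresholds, frame rate and function names are all assumptions for the example.

```python
def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def engagement_seconds(yaw_samples, item_azimuth,
                       frame_rate=30.0, fov_deg=90.0, focus_deg=30.0):
    """Given per-frame head-yaw samples (degrees) and the azimuth of a
    tagged item in the 360-degree scene, estimate exposure (seconds the
    item was inside the assumed field of view) and focus (seconds it
    was near the centre of attention)."""
    dt = 1.0 / frame_rate
    exposure = focus = 0.0
    for yaw in yaw_samples:
        d = angular_distance(yaw, item_azimuth)
        if d <= fov_deg / 2:
            exposure += dt     # item was somewhere in frame
        if d <= focus_deg / 2:
            focus += dt        # item was centred in the view
    return exposure, focus

# One second looking at the item, then one second looking away:
yaws = [0.0] * 30 + [180.0] * 30
print(engagement_seconds(yaws, item_azimuth=0.0))
```

Aggregating these per-item numbers across viewers is what turns a subjective heat-map into the quantifiable metrics the interview describes.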

Russell Vargo, Thrillbox

6. How can you integrate this with social media?

360° video can be used for promotional purposes, as well as for publishing content. Multi-channel marketing is a great way to discover audiences on social media who take an interest in 360° photos and videos. When using our tools for promotional purposes, content can be posted on social platforms with a call-to-action outside of the ecosystem. From there, you can point audiences to a controlled environment where you have more owned-and-operated (O&O) control.

Ultimately, the lesson learned by many entities was that there was more value in owning and operating a distribution platform. This resulted in having direct control over content distribution and the monetization of their audiences, versus trusting third parties to host and monetize on their behalf. What’s important about Thrillbox’s Workbench technology is that it’s designed to give clients distributive control. This includes promotional and publishing control over where videos are embedded, how audiences are activated, and having access to engagement data that is more in-depth than social media platforms’ aggregated analytics offerings.


7. Now that you know what a person is looking at, and for how long, how do you monetize in VR and AR?

There are two ways to look at monetization. The first is how you can save money, and the second is how you can make money. Our Workbench tool provides a way for people to save money by having distributive control over their content, as well as access to analytics that provide a higher level of data fidelity. If one were to attempt to bring together multiple third parties and develop a “Frankensteined” platform, it would be expensive and time-consuming. The Thrillbox Workbench was developed with a Service-Oriented Architecture (SOA), designed to enable the integration of loosely coupled microservices. Our platform is capable of providing all the features of cobbling together multiple third parties, without the same costs, vendor management or time concerns.

In addition, there is inherent value in having O&O control. Currently, produced immersive content is primarily distributed on social media platforms. When a client uploads their content to one of those platforms, they surrender control over its distribution, as well as decisions over how the monetization of audience engagement is to occur. Distributive control is vital to determining how to make money in immersive media. Our tools give our clients that power and information, which can be used in many ways to activate a consumer. Thrillbox collects behavioural data and uses those data sets to generate actionable business intelligence and the performance metrics that enterprises are accustomed to. If we know what types of products were focused on in a 360° experience whose production was subsidized by product placement, that information can be used for targeted ad campaigns. The brands responsible for those campaigns will pay higher rates for access to those audience members’ devices to publish targeted advertisements, particularly to those who are aware of or interested in their products. The result of this targeting is a higher cost per thousand impressions (CPM).


8. How is this a long-term solution for brands?

Thrillbox’s technology is a long-term solution for brands and enterprises because of its architecture, which enables distributive control, streaming big data management and immersive behavioural analytics. The evolution of over-the-top (OTT) streaming and digital content delivery has allowed brands to build their own audiences. Brands want direct distributive control and direct access to those audiences; they don’t want to depend solely on third-party data from social media channels. Now, anyone can make their own content, distribute it, and keep more of the money with less revenue share. Instead of rushing for a first-to-market advantage, we focused on understanding the needs of enterprises, working with our lighthouse clients to build a solution that operates well with existing or legacy infrastructure.


9. What are some ways that immersive behavioural data sets can be integrated to work with different ecosystems?

Data enrichment and targeted marketing. For example, we can understand a person’s interests based on what they focus on within immersive content, and then spot opportunities to satisfy that interest as they move between places. If an agency created a 360° video for an NBA team and used our tools to tag each of the players within the experience, we could determine which members of the team each audience member directed their attention to the most and least. Knowing which players each viewer focused on would allow teams to send targeted advertisements for a specific player’s jersey to each end user. Perhaps those notifications arrive at halftime for fans attending the game in the arena, because the jersey is an exclusive sold only in specific locations within the stadium. Or maybe the end user is offered a discount to incentivize an online sale because they aren’t at the game.
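The targeting step in that NBA scenario reduces to a small decision rule. This is a hypothetical sketch — the player names, offers and attendance flag are invented for illustration — showing how per-player dwell times would drive the choice of promotion:

```python
def pick_offer(dwell_by_player, at_arena):
    """Choose a jersey promotion from per-player gaze dwell times (seconds).

    dwell_by_player: mapping of player name -> total gaze time.
    at_arena: whether the fan is attending the game in person.
    Returns (player, offer); names and offer copy here are hypothetical.
    """
    if not dwell_by_player:
        return None
    # The most-watched player is the one to merchandise.
    top_player = max(dwell_by_player, key=dwell_by_player.get)
    offer = ("exclusive in-arena jersey, halftime notification"
             if at_arena else "15% off online jersey order")
    return top_player, offer

print(pick_offer({"Player A": 42.0, "Player B": 7.5}, at_arena=False))
# → ('Player A', '15% off online jersey order')
```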

Benjamin.JPG
Benjamin Durham, Thrillbox

10. What does the future hold for VR, AR and immersive data and analytics? Where’s it all going?

One of the greatest challenges right now is how to create the proper standards for enterprises and brands to access valuable and actionable data while simultaneously preserving the integrity of a user’s privacy. There needs to be a level of integrity out of the gate when handling this information. We have worked really hard on developing a proprietary method for defining the user and collecting and associating information in a way that contains no personally identifiable information.
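Thrillbox’s method is proprietary, but one common pattern for associating behaviour with a user without storing personally identifiable information is keyed one-way hashing: derive a stable pseudonymous ID from a device identifier with a secret salt, so the same device maps to the same ID across sessions while the raw identifier is never kept. A minimal sketch (identifiers and salt are hypothetical):

```python
import hashlib
import hmac

def pseudonymous_id(device_id: str, salt: bytes) -> str:
    """Derive a stable, non-reversible viewer ID from a device identifier.

    HMAC-SHA256 with a server-side secret salt lets the same device map to
    the same ID across sessions without the raw identifier being stored.
    """
    return hmac.new(salt, device_id.encode(), hashlib.sha256).hexdigest()

salt = b"rotate-me-regularly"  # hypothetical secret, kept server-side
a = pseudonymous_id("device-1234", salt)
b = pseudonymous_id("device-1234", salt)
print(a == b, len(a))  # → True 64  (stable across calls, 64 hex chars)
```

Rotating the salt periodically limits how long any single pseudonym can be linked across data sets.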

In the future, Thrillbox will continue to build out the features and microservices offered on the Workbench, as well as expand the application of our big streaming data management platform to include VR and AR. While we look forward to the future, we do feel that focus needs to be on exercising humility and working diligently in the present. As a modality, immersive has yet to move beyond the trough of disillusionment to mass adoption. So there is much work to be done.  2018 is definitely the year of showing and proving. 

Overall, the future of immersive media looks very bright. Sorry for the pun, but advancements in image sensors, computing power, artificial intelligence, energy consumption, and display technology are occurring at an exponential rate. I am most intrigued by how these technological advancements will influence the method in which a story is told and how it is presented.

Sources: https://artillry.co/2017/11/30/measuring-ar-vr-engagement-a-conversation-with-thrillbox/

 


 

Highlights from CES

An interview with Ross Gillard, Head of Immersive Content Operations


1. What brands stood out at CES 2018? Why?

The Insta360 Titan. This 10K-resolution camera looks to be the system a lot of us in the industry have been waiting for. Details are still scarce, but we know it boasts a two-kilometre wireless range, Micro Four Thirds sensors and 75 degrees of overlap between lenses. In general, the price point of Insta360’s products has been in line with the industry’s sweet spot right now. It’s good to see the quality of product offerings on their roadmap increasing quickly. It was also exciting to see companies like Intel promoting the coming age of 5G through virtual reality (VR). It’s a promising sign that all of the big companies are beginning to treat VR as the future form of content. For me, it was less about brands and more about the widespread adoption of cutting-edge technologies like VR, augmented reality (AR), 5G and connected smart devices.

Insta360_Titan
Insta360 Titan 10K Camera

2. Were there any big reveals?

The HTC Vive Pro was something I was anticipating, but being able to get a hands-on demonstration of it was something I wasn’t expecting for a few more months yet. Getting the chance to put it on, walk around and witness the quality jump with my own eyes – that felt like a big reveal to me. Seeing how much interest there is from big companies around 5G integration into everyday devices – from the future of smart cities right down to consumer products – this all seems to be happening surprisingly fast, which is very exciting.

HTC_Vive_Blue
The HTC Vive Pro

3. What were the coolest gadgets you saw?

The Sports Pavilion was implementing a lot of innovation from other industries, applying technology to things you might not have imagined coming for a while yet. For example, there were GPS units and accelerometers the size of my thumbnail embedded into equipment like tennis rackets, so athletes could measure things like how their hands rotate and the velocity of impact. It’s exciting to imagine this future for athletics and sport, and how it can inform training and allow someone to examine their own swing after the fact in ways that previously only athletes competing at a professional level might have had access to.
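As a rough idea of how an embedded sensor turns raw readings into a swing metric: integrating accelerometer samples over the impact window estimates the change in velocity. The sample values and rate below are hypothetical, and real devices would also need bias correction and axis fusion — this is only the core arithmetic.

```python
def velocity_change(accel_samples, dt):
    """Estimate change in velocity (m/s) by integrating accelerometer
    readings (m/s^2) over time with the trapezoidal rule."""
    v = 0.0
    for a0, a1 in zip(accel_samples, accel_samples[1:]):
        v += 0.5 * (a0 + a1) * dt  # area under the acceleration curve
    return v

# Hypothetical 5 ms impact burst sampled at 1 kHz (dt = 0.001 s).
samples = [0.0, 200.0, 400.0, 400.0, 200.0, 0.0]
print(round(velocity_change(samples, 0.001), 3), "m/s")  # → 1.2 m/s
```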

I played a few rounds of mixed reality (MR) golf where I actually felt like I was on the course. Having to deal with my horrible hook in various grass lengths… I’ve seen golf simulators before but I had never really seen them as an alternative to anything other than the driving range. This felt pretty close to the full sport, less the wind and the sunburn.

It’s interesting to see the cross-pollination of industries and their respective developments, and how this can expedite technology’s ability to enhance our day-to-day lives. For example, there’s a computer-vision-based device for people with vision loss. The device clips onto your glasses, and when you point at an object, the speaker attached to the glasses describes what you’re looking at. It can read a menu out loud, recognize basic objects, and so on. Computer vision and artificial intelligence (AI) are going to be incredibly empowering for people who aren’t able to experience the world in the same way as you and I can.


4. What were some trends?

5G was the one global theme of the trade show. The throughput of data that we’re going to be able to exchange is going to create a vehicle for us to do so many new things. 5G will bring a lot of power to over-the-top methods of content distribution, and it will be the backbone for communication between massive networks of connected devices, such as vehicles and homes. To me, it represents a step towards efficiency. I’ve seen that in my own life with products like Nest, Sonos, Philips Hue and Waze. It’s exciting to imagine how a whole city of these types of apps could be connected and communicating with each other to increase efficiency across the board.

Intel
Intel’s immersive LED tunnel

5. What were the best new products?

The Samsung QLED television won “Best in Show”. The breadth of television and display technology was pretty impressive. I’m happy to see 3D televisions that don’t require glasses. Coming from a person who is on film sets often, seeing quality confidence monitors coming down in price is exciting. Display technology in general has been advancing rapidly since its invention and, unlike a lot of other industries, it never seems to plateau.


6. There were a lot of robots at this year’s conference, is this a growing tech trend?

It’s definitely a growing tech trend; however, I haven’t seen a lot of applications showing how robots are going to change people’s lives for the better, at least not in the home. Right now they’re perceived as companions that can put a smile on your face or help clean your house, like the Roomba. I think we’re a few years out from robots truly aiding and bettering the quality of human life.

I found it fascinating to watch a ping-pong match between an individual and a robot. If children grow up with access to technology like that, I can only imagine the level of dexterity, precision and hand-eye motor skills that can be developed from an earlier age. There has been a lot of stigma around technology and children being exposed to it at an early age. I think that as we get closer to technology mimicking the real world, the relationship between us and machines will change. Within sports, hopefully we find their place as our own personalized assistants, a sparring partner, a way to focus training. Beyond athleticism, if we can develop a machine to execute the nuanced motor skills of sport, then we’re even closer towards precise actions in things like surgery. Things that can drastically affect the quality of human life and reduce error.

Little_Robot


7. What value do you get from seeing how other industries solve their problems?

For me, it’s incredible to see which problems start-up companies have tried to solve with a product, whether they set out to solve a problem just for themselves or for everyone.

For whatever reason, they may not understand where similar pain points exist in other industries. When you walk these conference floors and see the problems other people have and the solutions they have created, you start to understand where the commonality in all of it lies. Sometimes you see solutions that people have created for their industry that are entirely applicable to yours.

CES is very important because it keeps your finger on the pulse of technology and up to date on the problems people are trying to solve via product. I truly believe that a lot of technology is created to solve a problem.

If you’re in your own world and you’re only looking at your own problems, then you’re probably overlooking solutions that other people have come up with that could be a part of your solution. Not everything needs to be built internally. More often than not, I think it’s best to embrace the hive.


8. What was the biggest difference between last year’s conference and this year’s conference?

VR, AR and MR were prevalent in so many booths. Large companies are embracing the technology and bringing it into their product ecosystems. At the beginning of last year, it felt as if VR needed its own new platform in order to exist. Today it feels like we are close to a point where it can coexist within the same platforms. Imagine apps like Netflix offering all types of content within one app. I’m not saying that’s happening yet with Netflix specifically, but seeing how seriously brands and services are treating VR and AR is a really promising sign that it’s here to stay and only going to continue to develop and improve.

CES_VR.jpg


9. What learnings did you take away from CES?

We’re all looking for tangible results from attending a conference like this. “How does this increase my revenue?” “How does this networking affect us?” What these conferences are really good at is bringing innovators into one place. You end up having conversations, start talking about your pain points, and then realize that others have similar ones or have already solved the same problems. If you stripped that part away from CES and limited it to the vendor booths and the tangible products, you’d be overlooking the power of all of these great minds coming together and having a dialogue with one another. I think a lot of the time we go back to our labs and production facilities and don’t talk to each other until the next conference. Having everyone together in a social atmosphere creates an empowering community where you learn a lot more from those conversations than from reading a pamphlet or a website.

In addition, seeing where consumer products are going really affirms the development path that we’re on. Feeling confident that we’ve got solutions the world wants, and that we’re solving problems others have yet to solve, is very exciting and empowering. It’s great to see that we’re not just trying to solve problems we have internally; these are large-scale solutions for a lot of people.


10. Where are things heading in immersive technology?

There’s a lot of interest in connectivity. With 5G and smart cities as a theme, the idea that all of this technology is no longer in isolation and is now communicating brings efficiency. Immersive technology is expanding into many industries. It’s not being treated as an isolated medium; it’s pervading every type of industry and every walk of life.

People are craving more from experiences and technology. I think we have a voracious appetite to use technology to free up more of our time in our day. There is a very reasonable argument to be made about how AI is potentially dangerous, but ultimately, I believe it’s going to serve people individually rather than disrupt the world negatively. It’s going to be used to empower us to take on more and focus on the things the brain excels at instead of redundancy. We need to spend more time tapping into our individuality rather than doing mundane, everyday tasks.

There’s been a lot of talk in previous years about presence and empathy. If you look at the political landscape right now, bringing people together seems to be of the utmost importance. Being able to put yourself in other people’s shoes, or put yourself in shoes that you’re not necessarily comfortable in, is really powerful. I’m equal parts terrified and awe-struck by the ocean. I’m never going to go to the bottom of the ocean, but I might be able to do that in VR and know what it feels like. Maybe the next time I swim in the ocean, I’ll be better informed and a little less afraid…


 

The Future of Sound

An interview with Dave Sorbara, Partner and Chief Creative Officer

 


1. Over the past 17 years, Grayson Matthews has specialized in music and sound for film, television and radio. Recently you started exploring new mediums, primarily around the creation of immersive content. What sparked your interest in immersive media and spatial audio?

Back in the late 1990s, we were in recording studios with giant consoles and tape machines that were very expensive – it was the only place where you could do professional recordings. Then the computer came along, and I remember being really fascinated with what you can do with a computer versus an analog console. I was interested in the ability of computers and technology to do a lot of things that analog technology in recording studios had been doing. That was a thread in our existence that always led us to the next transition in terms of the evolution of our company; it was really about technology.

As we progressed, I discovered spatial audio and virtual reality (VR) and what was happening behind the scenes with a lot of tech-heavy audio people, who are really more mathematicians than audio people. It’s a whole new way to think about sound and requires a new set of technologies. Suddenly, the possibilities of what you can do with sound began to open up again and I started getting the same feelings in my bones about sound. It was all about solving new problems and hearing things you never heard before. I felt reinvigorated in thinking that this was another wave of interest that can drive us forward over the next decade or so.


2. How is immersive content changing the way we tell stories and experience audio?

Immersive content is all about elevating engagement with the individual audience through interactive stories. It’s the idea that you want to pull the audience into the story rather than having them sit there and watch the narrative. For example, my four-year-old son loves watching videos on YouTube; however, his biggest obsession isn’t watching the video itself, it’s being able to change very quickly from video to video – he won’t watch the whole thing. At one point, I showed him a 360° video, and once he realized you can pan the camera around and engage with a piece of content, static video seemed boring in comparison.

For younger people who have grown up in the age of the iPhone and the Internet, being engaged with content more so than being a passive player is really important. A friend of mine’s son, who is 12 years old and loves hockey, would never sit and watch hockey on television. He would rather play hockey in a hundred different forms such as hockey video games, street hockey, fantasy league hockey, etc. You can engage with the love of that sport, instead of passively watching a Maple Leafs game on TV.


3. What are some of the best immersive experiences you can think of to date? Who’s doing this well?

I think the best example of immersive content today is mixed reality experiences, where you’re combining physical objects with a virtual free-roam space. Toronto companies such as The Void, Globacore and one of our partners, Seed Interactive, do this very well. It is remarkable how you can transport your mind and body to a different planet via these technologies. To feel like you’ve been transported instantly, and have your brain fully believe it, is pretty amazing.

Mixed reality (MR) experiences are definitely cutting-edge – there are not a lot of people doing them. We are in very early days, but there are some great VR games out there. One that I love playing is SUPERHOT, an amazing example of a simple concept that only works inside an immersive experience. Another huge one for me is music, because it is completely different in the context of immersive content. What I believe is most important about music these days is performance. As computers advance in music production, live music will be crucial to its longevity and popularity. Capturing a music performance using 3D capture technology for audio and video, and making someone feel like they are actually there, is one of the best immersive experiences. There’s such a different emotional connection you make to the music of a performance inside a headset or in a connected immersive piece of content.


4. What VR/AR gear would you recommend someone who is new to the immersive world?

At this moment in time, I would hesitate to buy anything VR-related that connects to a large computer. Vive’s next iteration will be coming out this year, and a lot of the gear is moving away from being tethered to computers. The next generation consists of stand-alone, no-computer-required, high-frame-rate and high-resolution headsets that will deliver what we’re delivering today on an Oculus connected to a powerful computer – only with greater portability, ease of use and less setup.

In terms of augmented reality (AR), if you haven’t experienced what you’re capable of doing now with ARKit on the iPhone, that’s the first place to dig in, because there’s so much showing up there. Check out the IKEA Place AR app and you’ll get a feel for where this medium can go. It’s a great way to get people familiar with the possibilities. Also, AR headsets will start making their way into the marketplace. Microsoft is coming out with a new, high-resolution iteration of their HoloLens, and the super-secretive company Magic Leap has finally shown off its first MR headset.


5. Tell us about your experience designing 360° soundscapes for SESQUI’s spatial dome.

I have to say that designing 360° soundscapes is interesting, because you take all the learning you’ve done for stereo and surround and think you can apply it, but then you realize it’s totally irrelevant and you have to start again from the beginning. The amazing thing about starting fresh is going back to that creative instinct of just using your ears, your feeling and your heart and asking yourself, “Does this sound cool? Is this interesting?” Surprisingly, you have an infinite number of creative possibilities in a 360° space. You’re creating environments and coming up with ideas on the spot, and you want to experiment with all of it because you have no idea whether they’re going to work or whether they’re right or wrong. What ends up happening is you go back to that instinctive child in you who’s just playing to find something. In a way, that’s what a lot of the SESQUI project was really about.

The most eye-opening thing about creating sound for these types of experiences is that you have to throw convention out of the window and start from scratch.

Starting from scratch is a beautiful thing because it’s freeing and scary. You thought you knew something and suddenly, now you don’t. It was an amazing experience for that.


6. What can we expect to see on Grayson’s mobile application?

What we’re trying to do is push the possibilities of music and sound. This is an application where we can show people where it can go. We’re limited a lot of the time by the mediums with which we’re sharing content, so we’ve developed an app that will demonstrate the most cutting-edge version of what we’re trying to do. We aim to find a natural place where sound can go that resonates the way music does emotionally.


7. What are some challenges you have faced in building and creating immersive music experiences artificially?

The biggest challenge is creating the feeling of what we’re trying to achieve for the end user. It’s a technical challenge, because a lot of the 3D spatial audio we hear today is a synthesized mathematical equation – it makes sense and it works, but it’s missing that natural sound that connects with the individual. Although it localizes sound, it doesn’t sound real.

For example, HEAR360’s 8ball microphone tries to capture that feeling, versus an ambisonic microphone, which captures a mathematical version of what it’s hearing. I think the next frontier is getting people to emotionally connect to sound the way they do to a piece of music. We’ve had 75 years of working with technology and understanding it in order to get to a place where we can make people feel something when they hear it.
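To make the “synthesized mathematical equation” concrete: one of the core cues binaural synthesis relies on is the interaural time difference (ITD) – the tiny delay between a sound reaching one ear and the other. Woodworth’s classic approximation is one well-known formula for it (this is a general illustration, not Grayson Matthews’ method; the head-radius default is a typical textbook value):

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Woodworth's approximation of interaural time difference (seconds)
    for a source at the given azimuth (0 deg = straight ahead).

    ITD = (r / c) * (theta + sin(theta)), with theta in radians.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

print(f"{itd_woodworth(90) * 1e6:.0f} microseconds")  # side-on source
```

A side-on source yields a delay on the order of hundreds of microseconds – mathematically correct localization, yet, as Dave notes, still missing the natural character a microphone like the 8ball tries to capture directly.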


8. Why is it crucial to run tests before integrating 360° audio into live-streams?

The tests are important because there’s no precedent for what you’re going to hear and what it’s going to sound like. Since everything is new, almost every project is an R&D experiment where you have to solve more problems. The technology we’re using is new and always changing, so every time you deploy new gear you have to understand what you’re going to get out of it. Testing and prototyping are a huge part of the process because there are so many variables. You need to do the R&D and testing to ensure you’re going to get something usable when you’re shooting.


9. What are your greatest findings in experimenting with immersive technologies?

The scariest thing I’ve learned is that immersive technologies can transport human beings pretty easily. Realizing how easy it is to take someone’s brain and convince them that they are doing something else, are somewhere else or are feeling something is pretty powerful. It has the ability to educate exponentially quicker than other methods, and it can make things feel more real than anything else. Recognizing how powerful that is, you know it’s here to stay – it just needs to be taken seriously.


10. Where do you think it’s all going?

We live in an experience economy, where experiences are the most important things to humans. The ability of new technologies to let people experience things they typically wouldn’t have access to, or haven’t felt before, is becoming more evident. Education is going to change drastically. For example, you could take a nine-month technical training course and shrink it down to two weeks because of technology like this. Entertainment is going to play a large role in the immersive world. In the near future, you’ll be able to sit courtside and watch a basketball game through six-degrees-of-freedom camera systems, which will give you the ability to walk and move through the space. AR has given us the ability to overlay information and content and integrate it into real space. For example, AR tape measures, which can be downloaded from the App Store, are a game changer. Businesses that mass-produce tape measures are now competing with AR technology, because an app gives everyone the convenience of virtually carrying a tape measure in their pocket. The AR tape measure apps are more powerful and efficient and have more features than a physical tape measure. These types of advancements are going to keep happening in the near future.
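The AR tape measure, for all its impact, rests on very simple math: once the AR framework resolves two screen taps to world-space anchor points (in metres), the measurement is just the straight-line distance between them. A minimal sketch with hypothetical anchor coordinates:

```python
import math

def measure(p1, p2):
    """Distance in metres between two world-space anchor points, such as
    the 3D points an AR framework's hit-testing returns for screen taps."""
    return math.dist(p1, p2)

# Two hypothetical anchors on a table edge, coordinates in metres (x, y, z).
start, end = (0.0, 0.0, 0.0), (0.3, 0.0, 0.4)
print(f"{measure(start, end) * 100:.1f} cm")  # → 50.0 cm
```

The hard part the frameworks solve is not this arithmetic but the tracking underneath it – estimating the device’s pose and the surface geometry accurately enough that the anchor coordinates mean something.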