What’s trending at NAB2018?

Niclas Hallgren, Technical Operations Manager, Institutionen för Kultur och Media, Arcada UAS
This year 244 first-time exhibitors joined a wide variety of expo veterans on the nearly one million net square feet of space that make up the NAB Show exhibit floor in Las Vegas.
“Our new and returning exhibitors are the foundation of NAB Show,” said NAB Executive Vice President of Conventions and Business Operations Chris Brown in his opening speech. “Attendees come from all around the world to see firsthand these companies’ cutting-edge technologies and practical business solutions that continue to innovate how content is created, distributed and consumed.”
As Technical Operations Manager, I was fortunate to represent Arcada at this gathering to hear all about some of the latest trends and innovations in our field. Here are a few ‘teasers’.
Broadcast Industry Future
Probably one of the biggest things happening is the transition to IP-based workflows. The 25-year-old SDI cable will be replaced by the same Ethernet cable we have long used to plug in our computers. The standards are now in place, and manufacturers are starting to ship products using NewTek’s royalty-free proprietary NDI system as well as SMPTE ST 2110.
SMPTE ST 2110 is likely to be the one preferred by the broadcast industry, as it carries uncompressed video at higher quality.
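To give a feel for what this means in practice, here is a minimal, hypothetical Python sketch of the basic idea behind RTP-based transport like ST 2110: a video frame stops being a continuous SDI signal and becomes a series of timestamped packets on an ordinary IP network. This is nowhere near a compliant ST 2110 sender (the real standard adds RFC 4175 line headers, PTP-locked timing and strict packet pacing); the multicast address and packet size are made up for illustration.
```python
import socket
import struct

# A real SMPTE ST 2110-20 sender is PTP-synchronized and wraps each
# payload in RFC 4175 line headers; this sketch only shows the basic
# principle: video frames become timestamped RTP/UDP packets.

MCAST_ADDR = ("239.0.0.1", 20000)   # hypothetical multicast group/port
PAYLOAD_SIZE = 1200                 # keep packets under a typical MTU
SSRC = 0x12345678                   # arbitrary stream identifier

def rtp_header(seq, timestamp, marker=0):
    """Build a 12-byte RTP header (RFC 3550)."""
    byte0 = 2 << 6                   # version 2, no padding/extension/CSRC
    byte1 = (marker << 7) | 96       # marker flags end of frame; PT 96
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, SSRC)

def send_frame(sock, frame_bytes, seq, timestamp):
    """Split one raw video frame into RTP/UDP packets."""
    for off in range(0, len(frame_bytes), PAYLOAD_SIZE):
        chunk = frame_bytes[off:off + PAYLOAD_SIZE]
        last = off + PAYLOAD_SIZE >= len(frame_bytes)
        sock.sendto(rtp_header(seq, timestamp, marker=int(last)) + chunk,
                    MCAST_ADDR)
        seq += 1
    return seq

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
fake_frame = bytes(1920 * 1080 * 2)  # one uncompressed 8-bit 4:2:2 frame
send_frame(sock, fake_frame, seq=0, timestamp=0)
```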
Panels and sessions
This year I had the opportunity to attend three sessions. Two of them featured several presenters within the same session.
- The New TV – Mobile, Immersive, Smart, Connected, Everywhere!
Presented by Rik Dunphy, Principal Visual Cloud Media Architect at Intel.
Dunphy talked about the tremendous growth in new media: the game streaming service Twitch, for example, has an audience approaching one million viewers per event, the same number of viewers as a “normal” TV program. Gaming traffic is predicted to grow 10x from 2016 to 2021. The trend is that you no longer play games on a console at home; instead, the games are hosted on cloud servers and the game video is streamed to you, which enables lower latency (crucial in high-action games) and higher quality. Ultra HD video streams will account for 20.7% of IP video traffic by 2021. Intel is also working on virtual reality technology that will allow individuals to feel like they are with their friends at a live sporting event when they are actually in a hotel room in another city:
“It is a long way off, but that is the vision,” said Dunphy.
To enable all this, we will need 5G mobile networks and fiber to the home.
- The Highly Personalized Future of Television
Presented by Anthony Berkley, Vice President for New Business Development and Product Management at Nokia.
“The industry is on the cusp of moving away from a fixed device with a screen and really enabling anything to be a screen. There is a huge amount of demand for making anything a screen,” Berkley said in his presentation.
The Nokia approach is ‘Any Surface, Any Show and Any One’: Any Surface, because they want to be able to “draw” a video on any surface around you; Any Show, so you can watch whatever you want, whenever you want; and Any One, so you can stay in contact with your friends during all this. Here is a video presenting their vision: https://www.youtube.com/watch?v=rOZUKpBwsF8 (External link)
- Bridging Gaming and Broadcast Technology in Virtual Sets
Presented by Éric Minoli, Vice-President & CTO at Groupe Média TFO.
Groupe Média TFO has bridged gaming technology and traditional broadcast virtual-set technology to create a high-productivity, low-cost approach to virtual studios: the Laboratoire d’Univers Virtuels (LUV). They use the power of the Unreal game engine from Epic Games, in an innovative development by Zero Density, to create an infinite variety of virtual worlds. For the first time, this unique space produces huge amounts of children’s content daily. There is no need for post-production; everything happens in real time. This lets them use sophisticated effects like fire, water and fog while remaining within a tight public-service budget. Watch this video to learn more: https://www.youtube.com/watch?v=EHV-TGuk01c&feature=youtu.be (External link)
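The core trick behind any virtual set is real-time keying: the presenter is shot against green, and a game-engine render is composited in behind them on every frame. Zero Density’s keyer does this on the GPU inside Unreal at broadcast quality; the NumPy sketch below is only my own toy illustration of the per-frame compositing idea, not their method.
```python
import numpy as np

def chroma_key(camera, background, green_margin=40):
    """Composite a camera frame over a rendered background.

    camera, background: HxWx3 uint8 RGB frames of equal size.
    A pixel counts as green screen when its green channel dominates
    red and blue by more than `green_margin`.
    """
    cam = camera.astype(np.int16)
    r, g, b = cam[..., 0], cam[..., 1], cam[..., 2]
    is_screen = (g - np.maximum(r, b)) > green_margin
    out = camera.copy()
    out[is_screen] = background[is_screen]   # swap in the virtual world
    return out

# Toy usage: a solid-green "camera" frame over a rendered gradient.
h, w = 1080, 1920
camera = np.zeros((h, w, 3), np.uint8); camera[..., 1] = 255
background = np.tile(np.linspace(0, 255, w, dtype=np.uint8)[None, :, None],
                     (h, 1, 3))
composite = chroma_key(camera, background)
```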
- How Advances in AI, Machine Learning, & Neural Networks Will Change Content Creation
Presented by Tom Ohanian, President at TAO Associates.
Ohanian talked about research on real-time software speech recognition that is now on a par with human transcriptionists (a 5.1% word error rate), and about software that can automatically create personalized viewer highlights and promos from single or multiple cameras. He presented content creation in three phases: Decreasing Human Workload; Content Insight and Workflow Steering; and finally Automatic Content Production. There were many videos in his presentation, but this one shows how far the technology has come. The result is still not perfect, but it is unsettling to know that it was made by a computer and not edited by a human: http://grail.cs.washington.edu/projects/AudioToObama/
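For context, that 5.1% figure is a word error rate (WER): the substitutions, deletions and insertions needed to turn the machine transcript into the reference, divided by the number of reference words. A minimal Python sketch of the standard calculation:
```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference words,
    computed as word-level Levenshtein distance via dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edits to turn first i reference words into first j
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i                      # delete i words
    for j in range(len(hyp) + 1):
        dist[0][j] = j                      # insert j words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j - 1] + sub,  # substitute/match
                             dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1)        # insertion
    return dist[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```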
- How AI will Take Productivity in the Broadcast Industry to the Next Level?
Presented by Dr. Johan Vounckx, Senior Vice-President of Innovation and Technology at EVS Broadcast Equipment.
EVS is trying to solve the problem of productions requiring ever more work: more personalized and enriched content is needed on tighter budgets. Neural networks and machine learning could solve the most difficult challenges. Basically, it works like this: feed training data into software with initial parameters, compare the output with the desired output, update the parameters, and repeat until the output matches; then move on to the next input data. At the moment, EVS software can direct live sports events to enable a personalized stream for viewers, for example one that follows a specific player, a team or the ball (in football), based on the content the cameras deliver. According to EVS, we will see an explosion in the application of AI to live content. They emphasize that AI is there to help people, not replace them (though it already seems clear it will replace some people in the future, which certainly elicited debate in the audience).
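As a rough illustration of the loop Vounckx described, here is a toy Python version that fits a two-parameter model with gradient descent. EVS naturally trains far larger networks on video, but the shape of the run-compare-update cycle is the same.
```python
# Toy version of the train-compare-update loop: fit y = w*x + b
# to example (input, desired output) pairs with gradient descent.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # desired: y = 2x + 1
w, b = 0.0, 0.0                                # initial parameters
lr = 0.05                                      # learning rate

for step in range(2000):
    grad_w = grad_b = 0.0
    for x, desired in data:
        output = w * x + b                     # run the model
        error = output - desired               # compare with desired output
        grad_w += 2 * error * x / len(data)    # how to nudge each parameter
        grad_b += 2 * error / len(data)
    w -= lr * grad_w                           # update the parameters
    b -= lr * grad_b                           # ...and repeat

print(round(w, 2), round(b, 2))                # converges to ~2.0, ~1.0
```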
- AI-Driven Smart Production
Presented by Dr. Yuko Yamanouchi of the Science and Technology Research Laboratories, NHK (Japan Broadcasting Corporation).
NHK is working on many different AI technologies to create content rapidly and effectively. For news gathering and editing, they use big-data analysis (extracting useful information and generating news manuscripts), image analysis (generating metadata for searching materials) and speech recognition (generating transcripts of interviews). This can then be used for accessible content conversion, for example automated audio descriptions for the visually impaired and computer-graphics (animation) sign-language generation for the hearing impaired. The presentation showed how NHK uses its Social Media Analysis System to categorize tweets into 24 types, such as ‘fire’ and ‘accident’, with an alert function that notifies producers when enough hits are found for certain themes and topics.
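NHK showed no code, and their system presumably relies on trained classifiers; the hypothetical Python sketch below (keyword rules and threshold invented by me) only illustrates the two ideas from the demo: sorting incoming tweets into categories and alerting producers once a category gets enough hits.
```python
from collections import Counter

# Hypothetical keyword rules standing in for NHK's trained classifier;
# the real system sorts tweets into 24 types such as 'fire' and 'accident'.
CATEGORY_KEYWORDS = {
    "fire": ["fire", "smoke", "burning"],
    "accident": ["crash", "collision", "derailed"],
}
ALERT_THRESHOLD = 3   # assumed: alert producers after this many hits

def categorize(tweet):
    """Return the first matching category, or None."""
    text = tweet.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return category
    return None

hits = Counter()
for tweet in ["Smoke rising near the station", "Huge fire downtown!",
              "Two trains in a collision", "The building is burning"]:
    category = categorize(tweet)
    if category:
        hits[category] += 1
        if hits[category] == ALERT_THRESHOLD:
            print(f"ALERT: {hits[category]} '{category}' tweets seen")
```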
- How AI is Powering the Intelligent Future of Video Editing
Presented by David Kulczar, Senior Offering Manager at IBM Watson Media.
IBM’s AI, called ‘Watson’, combines audio, visual, textual and external analysis when analyzing content. For audio analysis, Watson uses speech recognition with detection of emotion in voices and of crowd responses. For visual analysis, Watson can detect and identify objects in images and videos, and also detect specific movements or actions; beyond that, it can categorize and identify faces, brands, etc. The textual analysis examines text for highly relevant properties to detect categories, keywords, objects, tone, connotation and emotion. External analysis uses audience and environmental data together with AI and big data to better understand and predict patterns and trends. Watson was recently used at the Masters Tournament (golf) to create highlight clips in near real time that could be shared on social media and other high-value outlets. Notably, two years ago, Watson created a trailer for the 20th Century Fox AI horror/thriller Morgan, where it did the editing in 24 hours compared to 10-30 days for a human (a human was still needed to finish up and remove things that were too revealing of the plot): http://www.wired.co.uk/article/ibm-watson-ai-film-trailer (External link).
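Kulczar did not go into implementation detail, but the highlight use case can be pictured as fusing per-modality scores. The Python sketch below is purely my own guess at the shape of such a pipeline, with invented segment scores and weights: each video segment gets an ‘excitement’ score from the audio, visual and text analyses, and the top-scoring segments become the highlight reel.
```python
# Hypothetical fusion of per-modality scores into highlight picks.
# Scores and weights are invented for illustration; Watson's real
# models would derive them from speech, crowd noise, vision and text.
segments = [
    # (start_sec, crowd_noise, action_detected, commentary_excitement)
    (0,   0.2, 0.1, 0.3),
    (30,  0.9, 0.8, 0.7),   # a birdie putt, say
    (60,  0.4, 0.3, 0.2),
    (90,  0.8, 0.9, 0.9),   # the winning shot
]
WEIGHTS = (0.4, 0.4, 0.2)   # assumed relative importance per modality

def excitement(segment):
    """Weighted sum of the per-modality scores for one segment."""
    _, audio, visual, text = segment
    return sum(w * s for w, s in zip(WEIGHTS, (audio, visual, text)))

# Keep the two most exciting segments, then restore broadcast order.
top = sorted(sorted(segments, key=excitement, reverse=True)[:2])
print("Highlight reel starts at:", [start for start, *_ in top])
```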
To conclude, I must admit that I did not know that AI in our field was already so “good”. It will get much better at an ever faster pace, and it will definitely change the way we teach and produce movies in the very near future.
A more in-depth presentation will be offered at a forthcoming seminar.
Niclas Hallgren
23.04.2018