Facebook’s new tech at F8 2017 | Engineering Blog | Facebook Code

Day two of F8 2017 was packed with announcements about Facebook’s big technology bets across connectivity, video and virtual reality, and AI.

Among the updates from Facebook’s Connectivity Lab were three new records in wireless data transfer: a point-to-point data rate of 36 Gbps over 13 km with millimeter-wave technology, and 80 Gbps between the same points using our optical cross-link technology, enough bandwidth to stream up to 4,000 ultra-high-definition videos simultaneously. The Connectivity Lab team also demonstrated 16 Gbps simultaneously in each direction between a ground station and a Cessna aircraft circling more than 7 km away. This ground-to-air record modeled, for the first time, a real-life test of how the technology will be used.
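As a back-of-the-envelope check on that streaming figure (the ~20 Mbps per-stream bitrate is our illustrative assumption, not a number from the announcement), an 80 Gbps link works out to roughly 4,000 simultaneous UHD streams:

```python
# Sanity check: how many UHD streams fit in an 80 Gbps link?
# The 20 Mbps per-stream bitrate is an assumed typical UHD streaming rate.
link_gbps = 80
uhd_stream_mbps = 20

streams = (link_gbps * 1000) / uhd_stream_mbps  # convert Gbps to Mbps, then divide
print(int(streams))  # 4000
```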

The team is also developing what it calls a Tether-tenna: a small helicopter tethered to a fiber line and a power source that can be deployed in natural disasters or other situations where infrastructure has been damaged or destroyed but some fiber lines still work. Once completed, the technology will deploy immediately and operate for months at a time to restore connectivity in emergencies.

In his keynote, Facebook CTO Mike Schroepfer introduced the newest designs in the Facebook Surround 360 family: x24 and its smaller counterpart, x6. Both have six-degrees-of-freedom (6DoF) designs that leverage the latest Surround 360 technology, letting viewers move up, down, left, right, forward, and backward, with pitch, yaw, and roll, inside the experience to see content from different angles. x24 and x6 are part of an integrated post-production toolchain designed in partnership with leading post-production companies and services. Facebook is working to democratize the 6DoF experience, bringing an end-to-end workflow to creators and giving more people a chance to experience 6DoF in the future.

At Facebook, we care not only about giving people the power to create, but also the power to share. Starting today, people can more easily share their VR experiences with the 360 Capture SDK. With the new SDK, VR experiences can be captured instantly as 360 photos and videos, then uploaded to be viewed in News Feed or a VR headset. People no longer need the power of a supercomputer to capture their VR experiences: the SDK is compatible with multiple game engines and works on baseline recommended hardware for VR without compromising quality or speed.
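360 photos like the ones the SDK produces are commonly stored as equirectangular images, where each pixel column corresponds to a longitude and each row to a latitude on the viewing sphere. A minimal sketch of that mapping (illustrative only; the SDK’s internal formats and API are not described in this post):

```python
import math

def equirect_pixel_to_direction(x, y, width, height):
    """Map an equirectangular pixel to a unit view direction.

    Longitude spans [-pi, pi) across the image width;
    latitude spans [pi/2, -pi/2] from the top row to the bottom row.
    """
    lon = (x + 0.5) / width * 2 * math.pi - math.pi
    lat = math.pi / 2 - (y + 0.5) / height * math.pi
    return (
        math.cos(lat) * math.sin(lon),  # x: right
        math.sin(lat),                  # y: up
        math.cos(lat) * math.cos(lon),  # z: forward
    )

# A pixel at the center of the image looks (almost exactly) straight ahead.
dx, dy, dz = equirect_pixel_to_direction(960, 480, 1920, 960)
```

A renderer playing back the photo inverts this mapping: for each direction the viewer looks, it samples the pixel whose (lon, lat) matches.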

The final piece of our end-to-end video update is a set of improvements to our 360 video encoding and playback experience. We shared a new gravitational view-prediction model that uses physics and heatmaps to better predict where to deliver the highest concentration of pixels in every frame of a video. This model improves resolution on VR devices by up to 39 percent. We’re also testing a new encoding technique called content-dependent streaming, which delivers high-quality video in a single stream without knowing where the viewer is looking. It is made possible by an AI model we developed that can intuit the most interesting parts of a video, supporting streaming predictions for VR and non-VR devices in the absence of heatmap data.
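The intuition behind heatmap-driven encoding can be sketched as a simple allocation problem: give each region of the frame a share of the pixel (or bit) budget proportional to how likely viewers are to look there. This toy sketch is ours, not Facebook’s actual model; the function name, floor parameter, and weights are all assumptions:

```python
def allocate_budget(heatmap, total_budget, floor=0.02):
    """Split a per-frame budget across tiles in proportion to predicted
    view probability, keeping a small floor so no tile drops to zero.

    heatmap: list of non-negative view-probability weights, one per tile.
    """
    floored = [max(w, floor) for w in heatmap]
    total = sum(floored)
    return [total_budget * w / total for w in floored]

# Four tiles; viewers are predicted to look mostly at tile 1,
# so it receives most of the budget while the rest stay watchable.
shares = allocate_budget([0.1, 0.7, 0.15, 0.05], total_budget=100)
```

Content-dependent streaming fits the same frame: when no viewer heatmap exists, a saliency model supplies the weights instead.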

We also open-sourced the first production-ready release of Caffe2 — a lightweight and modular deep learning framework emphasizing portability while maintaining scalability and performance. Caffe2 is deployed at Facebook to help developers and researchers train large machine learning models and deliver AI-powered experiences in our mobile apps. Now, developers will have access to many of the same tools, allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile. We’re committed to providing the community with high-performance machine learning tools so that everyone can create intelligent apps and services. Caffe2 ships with tutorials and examples that demonstrate learning at massive scale, leveraging multiple GPUs in one machine or many machines with one or more GPUs each. Learn to train and deploy models for iOS, Android, and Raspberry Pi.
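The multi-GPU training those examples cover is typically data-parallel: each device computes gradients on its own slice of the batch, and the gradients are averaged before every replica applies the same update. A framework-free sketch of that pattern (plain Python standing in for Caffe2’s actual API; the toy model and names are ours):

```python
def local_gradient(weight, batch):
    # Toy one-parameter model: gradient of mean squared error
    # for predictions weight * x against targets y.
    return sum(2 * (weight * x - y) * x for x, y in batch) / len(batch)

def data_parallel_step(weight, shards, lr=0.05):
    # 1. Each "worker" (a GPU in practice) computes a gradient on its shard.
    grads = [local_gradient(weight, shard) for shard in shards]
    # 2. Gradients are averaged -- an all-reduce on real hardware.
    avg = sum(grads) / len(grads)
    # 3. All replicas apply the identical update, staying in sync.
    return weight - lr * avg

# Two workers, each holding a shard of (x, y) pairs sampled from y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, shards)
# w converges to 3.0, the slope the data was drawn from.
```

Because every replica sees the same averaged gradient, adding workers scales the effective batch size without changing the result of each step.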

You can find the rest of the Day 2 keynote and breakout session videos at fbf8.com. Thank you to everyone who came for a great event, and we hope to see you next year!
