A TV Guide is often the most complex view component within such applications. That’s both in terms of navigation, with its multidirectional controls, and in terms of performance, with thousands of programs loaded asynchronously.
Throughout my career, I have had the chance to implement TV Guides using Qt, JS Canvas, and UIKit. The UIKit-based solution is probably the most complex UI that I have ever created for the Apple ecosystem.
Even though the TV Guide solution at Zattoo has evolved significantly, adding a lot of navigation improvements to handle different types of remote controls, its initial implementation was based on the learnings from this great post by Kyle Andrews.
With SwiftUI in the wild for several years already, I could not resist creating a proof of concept for a TV Guide using this new UI library.
The basis for this proof of concept is similar to the one created by Kyle Andrews back in the day for UIKit. It consists of building a multidirectional grid of programs. To make it more dynamic, each program can vary in size, being either narrow or wide.
Here below you can find the implementation of the view corresponding to each program:
import SwiftUI

struct ProgramView: View {
    var isNarrow: Bool

    @Environment(\.isFocused) var isFocused: Bool

    var body: some View {
        ZStack {
            Rectangle().foregroundColor(isFocused ? .pink : .secondary)
            Text("Program Title")
                .foregroundColor(.white)
                .lineLimit(1)
        }
        // Narrow programs get the smaller width, wide ones the larger
        .frame(width: isNarrow ? 100 : 250, height: cellHeight)
    }
}
In addition to programs, we also need to show an initial column featuring all the Channels. In the following code you can find the implementation for the Channel view:
struct ChannelView: View {
    var channelNumber: Int

    var body: some View {
        ZStack {
            Rectangle().foregroundColor(.black)
            Text("Channel \(channelNumber)")
                .foregroundColor(.white)
                .lineLimit(1)
        }
        .frame(width: channelsWidth, height: cellHeight)
    }
}
With our components in place, we can proceed with the TV Guide itself. As you can see right below, implementing a TV Guide with SwiftUI is quite straightforward using scrollable stacks:
import SwiftUI

private let numberOfRows = 100
private let visibleNumberOfRows = 5
private let cellHeight: CGFloat = 100

#if os(tvOS)
private let channelsWidth: CGFloat = 150
#else
private let channelsWidth: CGFloat = 100
#endif

struct ContentView: View {
    var body: some View {
        VStack {
            Spacer()
            Text("The amazing SwiftUI Program Guide")
                .font(.largeTitle)
            Spacer()
            ScrollView(.vertical) {
                HStack(alignment: .top, spacing: 4) {
                    LazyVStack(alignment: .leading, spacing: 2) {
                        ForEach(0..<numberOfRows, id: \.self) { rowIndex in
                            ChannelView(channelNumber: rowIndex)
                        }
                    }
                    .frame(width: channelsWidth)
                    ScrollView(.horizontal) {
                        LazyVStack(alignment: .leading, spacing: 2) {
                            ForEach(0..<numberOfRows, id: \.self) { rowIndex in
                                LazyHStack(alignment: .top, spacing: 2) {
                                    ForEach(0..<30, id: \.self) { columnIndex in
                                        ZStack {
                                            ProgramView(isNarrow: (rowIndex + columnIndex).isMultiple(of: 2))
                                        }
                                        #if os(tvOS)
                                        .focusable()
                                        #endif
                                    }
                                }
                            }
                        }
                    }
                }
            }
            .frame(height: CGFloat(visibleNumberOfRows) * cellHeight)
            .edgesIgnoringSafeArea(.horizontal)
        }
        .frame(maxHeight: .infinity)
    }
}
With this implementation, we have achieved a multiplatform TV Guide solution. In the following videos, you can observe its functionality on both iOS and tvOS.
I have to say that I am really impressed by the minimal code required to develop a functional TV Guide prototype with SwiftUI. This is by far the easiest framework that I have had the chance to use for building this type of component.
You can easily compare this with the code required to build the same component using UIKit in the post by Kyle Andrews back in the day.
However, even though the prototype was that easy to build, I have a lot of concerns about its viability for real-world applications. Unfortunately, live data extends beyond just handling two different program durations. Dynamic data can bring a lot of edge cases requiring extensive optimizations. And with SwiftUI offering a higher-level API, incorporating those optimizations could be harder, if not unfeasible, without falling back to a custom UIKit implementation.
However, because this file is part of a specific project, it is still a common issue to end up pushing unwanted files by mistake. Pushing the famous .DS_Store file is a good example of this.
The good news is that we can easily solve this issue by defining a global .gitignore file that applies to all our projects.
To define a global .gitignore, we need to create a file listing the patterns that we want to ignore globally. For example, we can use the file ~/.gitignore_global with the following content:
*~
.DS_Store
After creating the global .gitignore file, we need to configure Git, using the file ~/.gitconfig, so that it knows this file should be used as the global ignore list.
[core]
excludesfile = ~/.gitignore_global
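Alternatively, instead of editing ~/.gitconfig by hand, the same setting can be applied with a single git command:
git config --global core.excludesfile ~/.gitignore_global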
You can find a working example of this configuration in my dotFiles repo.
For various reasons that I do not really care about, there has been a lot of controversy about Twitter lately. In the technical community, this situation has led to some relevant people replacing Twitter with Mastodon.
Mastodon is an open-source decentralized service for microblogging. It is part of the Fediverse, which is a group of services based on federated servers.
These days I have read a lot about the concept of the Fediverse and the different services that are currently available. If you want to know more about these topics, I recommend reading the Fediverse and Mastodon articles on Wikipedia.
The Fediverse sounded quite interesting to me, so I decided to give Mastodon a try.
I have to say that I am not a very active person on social networks. Even though I still have a Twitter account, just to avoid somebody else getting my username to impersonate me, the truth is that I barely use it.
The only use case for me to use Twitter nowadays is to share my blog posts right after publishing them on my website. However, you will rarely find tweets in my timeline. This is because a couple of days after publishing anything on Twitter, I go back there again just to delete it.
Yes, I know, this could be automated somehow, but since this situation happens around two or three times per year, I did not really want to waste any time on it.
Over the last few weeks, I have been using Mastodon as I would have used Twitter in the past. I shared some thoughts, shared articles and conferences that I found relevant, followed some people with the same interests as me, and interacted with others by liking, replying to, or republishing the publications that I found interesting.
To be honest, the experience was quite straightforward and not very surprising. The only thing that I found confusing was the way that searching works.
This is because, to avoid harassment, searching in Mastodon is limited to hashtags.
This was an interesting experiment. I think that it was a good experience to return to social networks for a few days to understand them better and to remember the reasons why I decided to stop using them back in the day.
Even though Mastodon is politically and technologically different from Twitter, the truth is that it is not that different on a regular basis. Because of this, I think that it still suffers the same issues that made me leave Twitter.
As with any other social network, using Mastodon produces a public log with too much personal information. Using Mastodon in a standard way means regularly sharing what you like, what you read, what you think, where you are, where you are traveling to, etc., and this is all information that I am not willing to share publicly.
As with Twitter, Mastodon is also based on individuals. In both networks you are not following topics but people. This leads to people with a lot of followers becoming tuitstars (or mastodostars?).
This level of fame is not something that everybody can cope with, and it is very easy to become arrogant or end up sharing way too much information as a result. I think it has happened to all of us that we followed a tuitstar developer and ended up knowing every single detail about their life.
One of the main benefits of leaving social networks back in the day was being able to focus on the present and not on sharing the moment publicly. Nowadays, for example, if I go to a restaurant, I like that I can focus on the food and the people who are with me, instead of trying to take the best photo to share on my timeline.
I also noticed that using social networks ruins my creativity. All those moments of silence and boredom are when my brain activates and starts creating ideas. You cannot imagine the number of bugs that I have fixed while taking a shower or on the train looking through the window on my way to the office.
Filling these moments with social networks means not giving my brain any time slot for creativity.
With this experiment, I learned once again that I am happier without social networks. In practice, I do not really care whether they are open source or decentralized; I simply do not like the way I feel when using them, and I do not like the impact they have on my life.
So, what am I doing next?
Basically, the same thing that I did back in the day with Twitter. I am deleting all my posts in Mastodon, logging out, and continuing with my life.
I have already adopted healthier alternatives to stay up to date, such as Books, RSS feeds, and Podcasts, and I am happy to know that leaving Mastodon means I will once again have more time to focus on those instead.
You are watching a video in a quiet environment; you set the volume to the lowest possible value and it is too loud, so you press the volume-down button once, and now your device is muted.
Some other day, you are at a party listening to music with the volume set at 100%; you change it to 80% and can barely notice any difference.
How can it be that a single volume step is too much in the first case, but going from 100% to 80% feels like nothing in the second one?
The reason is the mismatch between the way developers implement volume controls and the way our ears perceive loudness.
Volume controls are quite often implemented linearly, while the human ear perceives loudness logarithmically.
This is a well-known issue among audio engineers and game developers, who often work with logarithmic audio scales, such as decibels (dB), that account for this perception.
After years working as a software engineer in the TV industry, I can assure you that this knowledge is not very widespread in the sector, which is probably why volume controls are so often implemented incorrectly.
Once you know about it, implementing a volume control logarithmically is actually quite an easy task.
As a reference, in the case of iOS Development, we could implement it with a simple extension for AVPlayer:
import AVFoundation

extension AVPlayer {
    /// Volume expressed on a perceptual curve: the linear volume is the square of this value
    var logarithmicVolume: Float {
        get {
            return sqrt(volume)
        }
        set {
            volume = pow(newValue, 2)
        }
    }
}
From now on we could use logarithmicVolume instead of volume to get a logarithmic volume control.
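As a quick illustration, here is a minimal usage sketch, assuming an existing AVPlayer instance named player:
// Setting the perceptual volume to 50% maps to 25% on the linear scale
player.logarithmicVolume = 0.5
print(player.volume)            // 0.25
print(player.logarithmicVolume) // 0.5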
If you are interested in further details on this topic, I recommend checking out this Wikipedia page.
One of the features that Apple announced for this remote control was the possibility of scrubbing content in AVPlayerViewController, using a circular gesture similar to the classic click wheel of the first iPods.
Sadly, Apple did not present any new API to make it easier for us developers to adopt this new gesture in our apps. On top of that, when I explicitly asked how to implement it in the Apple developer forums, I was told that using AVPlayerViewController was the only way to get this gesture.
This post is the result of my attempts to capture this new gesture.
As we already saw in “Directional clicks on tvOS”, there is no way to get the precise location of the finger on the digitizer of the Siri Remote from an instance of UITouch. In fact, to prevent people from creating pointer-based applications, the coordinates of any gesture on the Siri Remote always start from the center of the touchpad, wherever you actually start the gesture from.
Nevertheless, thanks to the GameController framework we can work at a lower level of abstraction with the controller engine. And, lucky for us… it does allow getting the absolute directional pad values from the controllers (in our case, from the Siri Remote).
In the following code snippet, you can find an example of how we can capture and log the exact location of the finger on the digitizer.
import UIKit
import GameController

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpControllerObserver()
    }

    // MARK: - Private

    private func setUpControllerObserver() {
        NotificationCenter.default.addObserver(self, selector: #selector(controllerConnected(note:)), name: .GCControllerDidConnect, object: nil)
    }

    @objc
    private func controllerConnected(note: NSNotification) {
        // Only considering a single controller for simplicity
        guard let controller = GCController.controllers().first else { return }
        guard let micro = controller.microGamepad else { return }
        micro.reportsAbsoluteDpadValues = true
        micro.dpad.valueChangedHandler = { [weak self] (pad, x, y) in
            print("[\(x), \(y)]")
        }
    }
}
From the logs of the previous code, we can find out that our coordinate space looks like the following image:
Now that we know our working environment better, the next step is detecting whether the user is touching the outer ring or the inner part of the digitizer.
We can do that by calculating the radius from the center of the digitizer to the user’s finger, and then discarding the gestures that are too close to the center.
Using the following image as a reference, we want to filter out the gestures whose radius falls in the red area.
This is something we can do with the following code. After testing with multiple values, I found out that a threshold of 0.5 works pretty well.
// Get the distance from the center of the digitizer to the gesture location
let radius = sqrt(x*x + y*y)

// Discard gestures outside the ring area of the Siri Remote
guard radius > 0.5 else { return }
The next step is to give some visual feedback to the user, so that they understand that the gesture is working. For this playground project, I opted for adding a hint view with a transparent background on top of an image of the Siri Remote.
If you want to use this solution in your project, you will need to adjust this part to your visual requirements.
With some simple trigonometry, we can easily calculate the angle by which we should rotate the hint view.
The following snippet does the actual maths and the rotation.
// Rotate hintView to the appropriate radians
let cos = x / radius
let sin = y / radius
let radians = atan2(sin, cos)
hintView.transform = CGAffineTransform(rotationAngle: CGFloat(-radians))
To make it easier to understand what is happening behind the scenes, here you can see another version using a colored background.
So far, we already have a pretty good-looking result. But in practice we will also need the direction of the gesture, which is ultimately the information that will allow us to change the value of a slider.
The following code shows the maths to compute and log the direction of the circular gesture.
let normalizedRadians = (radians + (2 * .pi)).truncatingRemainder(dividingBy: 2 * .pi)
let radiansOffset = normalizedRadians - self.currentRadians
let normalizedRadiansOffset = (radiansOffset + (2 * .pi)).truncatingRemainder(dividingBy: 2 * .pi)
print(normalizedRadiansOffset > .pi ? "➡️" : "⬅️")
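As a final step, we would typically accumulate that offset into some scrubbing value. The following is only a minimal sketch of how that could look; it assumes a stored currentRadians property and a hypothetical scrubProgress value between 0 and 1, neither of which is part of the original snippet:
// Convert the wrapped offset into a signed step: values above π correspond
// to a clockwise movement (scrubbing forward), the rest to counterclockwise
let step = normalizedRadiansOffset > .pi
    ? (2 * .pi) - normalizedRadiansOffset   // clockwise, positive step
    : -normalizedRadiansOffset              // counterclockwise, negative step

// Accumulate the step into the assumed progress value (one full turn = 100%)
scrubProgress = min(max(scrubProgress + step / (2 * .pi), 0), 1)

// Remember the current angle for the next callback
self.currentRadians = normalizedRadians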
For better understanding, please find here a project showing a working implementation of the method described in this post.
One of the use cases where I see potential for AR technology is in the world of video playback apps.
If you think about it… who needs a huge physical TV when you can cast your content to a virtual TV that you can place wherever you want?
That’s the reason why I decided to check how to create a video player with RealityKit, and I found out that it is quite easy.
The first thing we need to do to create an AR video player with RealityKit is to create a standard instance of AVPlayer to play our video asset.
let url = URL(string: "https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/bipbop_16x9_variant.m3u8")!
let playerItem = AVPlayerItem(url: url)
let player = AVPlayer(playerItem: playerItem)
player.play()
Then we need to create a ModelEntity using a VideoMaterial backed by our instance of AVPlayer.
let screenMesh = MeshResource.generatePlane(width: 0.7, height: 0.5)
let videoMaterial = VideoMaterial(avPlayer: player)
let modelEntity = ModelEntity(mesh: screenMesh, materials: [videoMaterial])
And that’s all we need to create a RealityKit player. Now we only need to place our ModelEntity in the AR world. To do that, we need to create an AnchorEntity that defines a location in the AR world.
A simple way to create an AnchorEntity is by using a UITapGestureRecognizer: once we have the location of the gesture on the screen, we can translate it to world coordinates using the raycast method of ARView.
@objc
private func tapWasReceived(recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: arView)
    let results = arView.raycast(from: location, allowing: .estimatedPlane, alignment: .horizontal)
    if let firstResult = results.first {
        let anchorEntity = AnchorEntity(world: firstResult.worldTransform)
        addScreen(anchorEntity: anchorEntity)
    }
}
In the following code, you can find a ViewController with everything put together.
import UIKit
import RealityKit
import AVFoundation
import ARKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpView()
        setUpTapDetection()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        arView.frame = view.bounds
    }

    // MARK: - Private

    private lazy var arView: ARView = {
        let arView = ARView()
        return arView
    }()

    private func setUpView() {
        view.addSubview(arView)
    }

    private func setUpTapDetection() {
        let tapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapWasReceived(recognizer:)))
        arView.addGestureRecognizer(tapGestureRecognizer)
    }

    private func addScreen(anchorEntity: AnchorEntity) {
        let url = URL(string: "https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/bipbop_16x9_variant.m3u8")!
        let playerItem = AVPlayerItem(url: url)
        let player = AVPlayer(playerItem: playerItem)
        let screenMesh = MeshResource.generatePlane(width: 0.7, height: 0.5)
        let videoMaterial = VideoMaterial(avPlayer: player)
        let modelEntity = ModelEntity(mesh: screenMesh, materials: [videoMaterial])
        anchorEntity.addChild(modelEntity)
        arView.scene.addAnchor(anchorEntity)
        player.play()
    }

    // MARK: - Action

    @objc
    private func tapWasReceived(recognizer: UITapGestureRecognizer) {
        let location = recognizer.location(in: arView)
        let results = arView.raycast(from: location, allowing: .estimatedPlane, alignment: .horizontal)
        if let firstResult = results.first {
            let anchorEntity = AnchorEntity(world: firstResult.worldTransform)
            addScreen(anchorEntity: anchorEntity)
        }
    }
}
At Zattoo, we are no exception, and we also had to implement this playback limitation in our apps.
Sadly, when trying to implement this behaviour on tvOS, we found out that the only way provided by Apple to prevent forward-seeking content is by using the requiresLinearPlayback property of AVPlayerViewController.
The problem is that setting this property prevents not only seeking forward but also seeking backward.
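For reference, this is what that all-or-nothing approach looks like (a one-line sketch, assuming an existing AVPlayerViewController instance named playerViewController):
// Disables skipping in both directions, which is more than we want
playerViewController.requiresLinearPlayback = true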
Trying to limit the functionality for our users as little as possible, we did some investigation and found a way to block seeking forward while still offering the possibility to seek backward.
The solution is based on the method timeToSeekAfterUserNavigatedFrom of AVPlayerViewControllerDelegate.
This delegate method is called when the user of our app tries to seek in an instance of AVPlayerViewController, and it allows returning a custom target time for the seek.
By implementing this method, we can disable forward seeking by simply returning the current playback position whenever the user tries to seek forward.
extension PlayerWithoutFastForwardViewController: AVPlayerViewControllerDelegate {

    func playerViewController(_ playerViewController: AVPlayerViewController, timeToSeekAfterUserNavigatedFrom oldTime: CMTime, to targetTime: CMTime) -> CMTime {
        guard let currentItem = playerViewController.player?.currentItem else { return targetTime }
        let isForwarding = targetTime.seconds > oldTime.seconds
        if isForwarding {
            return currentItem.currentTime()
        }
        return targetTime
    }
}
Notice that we are not returning oldTime but the current playback position (currentItem.currentTime()). This is because otherwise the user could still skip content by pressing the fast-forward button several times in a row on their remote control.
Implementing the previous method is not enough, because at this point the user could still skip content by triggering a long press on the right button of the Siri Remote. This gesture activates the fast-forward mode in the player.
To prevent fast-forwarding the content this way, we need to define and use a custom AVPlayerItem that overrides canPlayFastForward and canPlaySlowForward to return false.
class PlayerItemWithFFDisabled: AVPlayerItem {

    override var canPlayFastForward: Bool {
        false
    }

    override var canPlaySlowForward: Bool {
        false
    }
}
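We then feed this custom item to the player instead of a plain AVPlayerItem. The following is only a usage sketch; the stream URL and the surrounding player setup are assumptions, not part of the original snippet:
// Use the custom item so the long-press fast-forward gesture gets rejected
let url = URL(string: "https://example.com/stream.m3u8")!
let playerItem = PlayerItemWithFFDisabled(url: url)
let playerViewController = AVPlayerViewController()
playerViewController.player = AVPlayer(playerItem: playerItem)
playerViewController.delegate = self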
With this change, the user won’t be able to fast-forward anymore using this gesture either. And that’s all, we are done, except for one detail…
At this point, we managed to achieve our goal. Our users will not be able to skip content forward anymore while we still offer them the possibility to seek backward.
But this solution has a problem: it looks like a bug. When the user tries to skip content forward, we send them back to their current position and they do not know why.
To make it easier to understand what’s going on, at Zattoo we decided to show a toast with a hint message explaining why we are sending the user back.
You can find here a project showing a working implementation of the method described in this post.
Because the interface of CMTime is horrible, and its documentation is even worse, here you have a few use cases to make it easier to work with CMTime on a daily basis.
Apart from its initializers, a CMTime is often created with the function CMTimeMake.
This function creates a CMTime with a duration of value/timescale seconds.
Examples:
CMTimeMake(value: 1, timescale: 1) // 1 second
CMTimeMake(value: 1, timescale: 2) // 0.5 seconds
CMTimeMake(value: 2000, timescale: 1000) // 2 seconds
CMTimeMake(value: 2000, timescale: 3000) // 0.6666666 seconds
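If you prefer to think directly in seconds, there is also an initializer that takes a Double. A small sketch; the preferredTimescale of 600 is just a common choice, not a requirement:
let twoAndAHalfSeconds = CMTime(seconds: 2.5, preferredTimescale: 600)
// Equivalent to CMTimeMake(value: 1500, timescale: 600)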
CMTime has a value property, but because its meaning depends on the timescale of the CMTime, it is not very useful on a daily basis. Instead, it is usually better to use CMTimeGetSeconds.
CMTimeGetSeconds is a function that returns a floating-point value representing a CMTime in seconds.
Example 1:
let time = CMTimeMake(value: 2000, timescale: 1000)
let seconds = CMTimeGetSeconds(time)
print(seconds) // 2.0
Example 2:
let time = CMTimeMake(value: 1, timescale: 5)
let seconds = CMTimeGetSeconds(time)
print(seconds) // 0.2
As syntax sugar 🍭, you can also get the seconds of a CMTime value from its seconds property.
let secondsWithCMTimeGetSeconds = CMTimeGetSeconds(time)
let secondsWithSyntaxSugar = time.seconds
print(secondsWithCMTimeGetSeconds == secondsWithSyntaxSugar) // true
All it says is “No overview available.”, but here you have the documentation from Apple as a “reference”:
https://developer.apple.com/documentation/coremedia/cmtime/1489443-seconds
Apart from the numerical values that one could expect from a CMTime, it can also take other fun values such as NaN (Not a Number) or infinity.
The isFinite property can be useful to deal with these cases.
You can find here below an example of this use case. Contrary to what one could expect, the following code will print nan, even though time has no value. This is because the fallback string is never used: time.seconds does not return nil but NaN.
let time = CMTime()
print(time.seconds ?? "I'm sorry but I do not have any value for you")
Here you can see how we can use isFinite to deal with this problem:
let time = CMTime()
let seconds = time.seconds
print(seconds.isFinite ? seconds : "I'm sorry but I do not have any value for you")
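If you need this check in several places, a small helper can hide it. The following extension and its finiteSeconds property are my own suggestion, not part of CoreMedia:
import CoreMedia

extension CMTime {
    /// The duration in seconds, or nil when the time is NaN, infinite, or otherwise invalid
    var finiteSeconds: Double? {
        seconds.isFinite ? seconds : nil
    }
}

let time = CMTime()
if let seconds = time.finiteSeconds {
    print(seconds)
} else {
    print("I'm sorry but I do not have any value for you")
}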
struct ContentView: View {
    var body: some View {
        Form {
            Section {
                Toggle(isOn: $myToggleValue) {
                    Text("My Toggle")
                }
            }
        }
    }

    // MARK: - Private

    @State private var myToggleValue: Bool = false
}
But my use case was a little bit more complex. I did not have to display a single Toggle but a list of them, with the source of data being a dynamic Dictionary that could contain an arbitrary number of elements.
My first attempt was trying to do it by extending the single Toggle example with a ForEach that iterates over all the keys of my Dictionary:
struct ContentView: View {
    var body: some View {
        Form {
            Section {
                ForEach(allKeys, id: \.self) { key in
                    Toggle(isOn: $myToggleValues[key]) {
                        Text("My Toggle")
                    }
                }
            }
        }
    }

    // MARK: - Private

    @State
    private var myToggleValues: [String: Bool] = [
        "One": false,
        "Two": true,
        "Three": true,
        "Caramba": false,
    ]

    private var allKeys: [String] {
        return myToggleValues.keys.sorted().map { String($0) }
    }
}
But sadly, it did not work. If you try to run this code, you will discover that it does not build. The compiler will show the following error in the line defining the Toggle:
Cannot convert value of type 'Binding<Bool?>' to expected argument type 'Binding<Bool>'
The error makes a lot of sense. The compiler is saying that $myToggleValues[key] is not guaranteed to have a Bool value. This is because the key might not actually be in the dictionary, and in Swift, accessing a nonexistent key in a dictionary returns nil.
This is a situation where creating a custom Binding can be useful. Instead of using $ to get a standard Binding, I solved my problem by creating a custom one.
By defining a custom Binding you have the option to do whatever you want, for example to fall back to a false value in case the key is not found in the dictionary.
Here you can see the final result:
Here you have the code:
import SwiftUI

struct ContentView: View {
    var body: some View {
        Form {
            Section {
                ForEach(allKeys, id: \.self) { key in
                    Toggle(isOn: binding(for: key)) {
                        Text(key)
                    }
                }
            }
        }
    }

    // MARK: - Private

    @State
    private var myToggleValues: [String: Bool] = [
        "One": false,
        "Two": true,
        "Three": true,
        "Caramba": false,
    ]

    private var allKeys: [String] {
        return myToggleValues.keys.sorted().map { String($0) }
    }

    private func binding(for key: String) -> Binding<Bool> {
        return Binding(get: {
            return self.myToggleValues[key] ?? false
        }, set: {
            self.myToggleValues[key] = $0
        })
    }
}
In order to get this in-stream timed metadata from the client, we can make use of AVPlayerItemMetadataOutputPushDelegate.
Please notice that getting access to timed metadata used to be straightforward by observing the timedMetadata property of AVPlayerItem. However, with the release of iOS 13.0, Apple marked this property as deprecated.
You can find here below an example of a simple player catching and logging timed metadata to the console using AVPlayerItemMetadataOutputPushDelegate.
The example uses one of Apple’s example streams, which contains timed metadata (a time code every 5 seconds).
And here is the code:
import UIKit
import AVFoundation

class PlaygroundPlayerViewController: UIViewController, AVPlayerItemMetadataOutputPushDelegate {

    // MARK: - UIViewController

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpPlayerLayer()
        let stream = URL(string: "https://devstreaming-cdn.apple.com/videos/streaming/examples/bipbop_16x9/bipbop_16x9_variant.m3u8")!
        play(url: stream)
    }

    // MARK: - AVPlayerItemMetadataOutputPushDelegate

    func metadataOutput(_ output: AVPlayerItemMetadataOutput, didOutputTimedMetadataGroups groups: [AVTimedMetadataGroup], from track: AVPlayerItemTrack?) {
        if let item = groups.first?.items.first,
           let metadataValue = item.value(forKeyPath: #keyPath(AVMetadataItem.value)) {
            print("Metadata value: \n \(metadataValue)")
        } else {
            print("MetaData Error")
        }
    }

    // MARK: - Private

    private var playerLayer: AVPlayerLayer!
    private var player: AVPlayer!
    private var playerItem: AVPlayerItem!

    private func play(url: URL?) {
        guard let url = url else { return }
        let asset = AVAsset(url: url)
        playerItem = AVPlayerItem(asset: asset)
        player = AVPlayer(playerItem: playerItem)
        let metadataOutput = AVPlayerItemMetadataOutput(identifiers: nil)
        metadataOutput.setDelegate(self, queue: DispatchQueue.main)
        playerItem.add(metadataOutput)
        playerLayer.player = player
        player.play()
    }

    private func setUpPlayerLayer() {
        playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
    }
}
The output in the console looks like this:
Metadata value:
*** THIS IS Timed MetaData @ -- 00:00:00.0 ***
Metadata value:
*** THIS IS Timed MetaData @ -- 00:00:05.0 ***
Metadata value:
*** THIS IS Timed MetaData @ -- 00:00:10.0 ***
Metadata value:
*** THIS IS Timed MetaData @ -- 00:00:15.0 ***
Metadata value:
*** THIS IS Timed MetaData @ -- 00:00:20.0 ***