Aspera Moves Mediacorp’s Miles of Files
2017 was a big year for Singapore’s national broadcaster, Mediacorp, with the launch of its new HQ in the city’s Mediapolis precinct. As well as a new building and studios, there is a new “engine” under the hood in the form of a comprehensive file-based workflow.
Powering this engine are IBM Hybrid Cloud solutions, including IBM Aspera Orchestrator, part of the Aspera platform, which manages video and audio for more than 35 Mediacorp brands, including seven television and 11 radio stations as well as newer over-the-top (OTT) distribution systems.
Orchestrator now provides a common, streamlined solution for Mediacorp to collect, manage and distribute video across all systems, including on-premises systems and public clouds. Orchestrator helps manage more than 200 workflows and automatically handles parts of the process, such as encoding video for different formats, to help reduce manual tasks and speed time to market. In addition, IBM Aspera Faspex and Aspera Shares transfer files between internal sources, global sites and external vendors.
The new IBM Aspera platform being used by Mediacorp now handles the end-to-end production process for video — ingest, asset management, transcoding, quality control and distribution for various platforms, including linear programming on TV and radio stations and OTT systems. It helps manage the workflow sequencing, queuing and load balancing, while providing a comprehensive view and control point for different processes, users and subsystems.
According to Steve Pollini, Vice-President of Technology Solutions with IBM Aspera, building new headquarters gave Mediacorp the opportunity to expand its use of Aspera solutions from high-speed file transfer to collaboration.
“In terms of the overall goal, Mediacorp really wanted to improve their processes, modernise the operation and position themselves for the evolving trends – not only being able to handle linear, but over-the-top more effectively. They wanted to lower their operational costs, reduce errors and make their operations more deterministic, and that’s what they were looking for in their tender in terms of workflow management and coordination. We have quite a lot of experience with broadcasters around the world doing large projects of similar scale, so we proposed Aspera Orchestrator.”
So, what is the scope of the Orchestrator implementation?
“They have about 20 major subsystems and various vendor systems that they use for transcoding, QC, subtitling, their MAM, their own scheduling and planning system, media analysis, virus scanning, etc. The way a lot of folks in the past have tied these things together is via a scheduling system which would invoke the MAM and kick off a process that sort of cascaded through until something popped out the other end, ready for feeding to the live system.
“One of the reasons I think they appreciated what Orchestrator could bring is that not only would it be able to coordinate the workflows and manage all of their technical assets, but it would also effectively operate like an enterprise service bus, which was quite central to the request they had within their tender.
“The idea of the enterprise service bus is that it ties everything together, from program acquisition, scheduling and metadata processing through to distribution, rights, ad sales and VOD planning and then, at some point, as they evolve, they can tie it into their finance and broadcast resource management systems.
“In addition to that, instead of kicking off the process and then waiting for something to pop out at the other end, they wanted a comprehensive end-to-end view of everything going on in the system. So, instead of a request going from the scheduling system into the MAM and then cascading through the system, everything starts with a request from the scheduling system, which invokes Aspera Orchestrator.
“In order to accomplish their end goals, we created approximately 250 workflows within Orchestrator. Of those, 50 are mainline workflows and the other 200 are what we call ‘sub-workflows’. Let’s say you’re going to do transcoding for linear and you’re also going to do transcoding for over-the-top: the transcoding and QC portion of that could constitute a sub-workflow that can be called by each of the multiple mainline workflows. So, once you design it, build it and test it, it’s like an object, a sub-routine, and you can reuse it with other workflows.
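The sub-workflow reuse Pollini describes can be sketched in plain Python. To be clear, this is an illustration of the pattern only, not the Aspera Orchestrator API, and the profile names are invented:

```python
# Illustrative sketch: two mainline workflows (linear TV and OTT)
# compose one shared, pre-tested "transcode + QC" sub-workflow.

def transcode_and_qc(asset, profile):
    """Shared sub-workflow: transcode an asset, then QC the result."""
    transcoded = f"{asset}.{profile}"   # stand-in for a real transcode step
    passed_qc = len(transcoded) > 0     # stand-in for a real QC check
    return transcoded if passed_qc else None

def linear_workflow(asset):
    """Mainline workflow for linear TV playout (profile name invented)."""
    return transcode_and_qc(asset, profile="mxf_broadcast")

def ott_workflow(asset):
    """Mainline workflow for over-the-top delivery (profile name invented)."""
    return transcode_and_qc(asset, profile="h264_abr")

print(linear_workflow("promo01"))   # promo01.mxf_broadcast
print(ott_workflow("promo01"))      # promo01.h264_abr
```

The payoff is exactly the one described: the shared step is tested once and reused by every mainline workflow that needs it.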
“By using Orchestrator as a hub, it knows the status of every step of every flow within the system. Therefore, we could create dashboards that management could use to know the status of the entire system. You would also know the status of any asset flowing through the system – whether it was successful, whether it was meeting their SLAs, or whether there was a problem. So, you have both management-level dashboards and the normal technical displays and dashboards that allow your technical staff to investigate and research issues.
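The hub-and-status idea translates into a very small data model: every step of every flow reports to one place, and both a management summary and a per-asset technical view fall out of it. A minimal sketch (not Orchestrator's actual data model; the asset and step names are invented):

```python
from collections import defaultdict

# Central status store: asset -> {step: state}. Every workflow step
# reports here, so one structure serves both dashboard views.
status = defaultdict(dict)

def report(asset, step, state):
    """A workflow step reporting its state to the hub."""
    status[asset][step] = state

def management_summary():
    """Management-level view: is any asset in trouble?"""
    return {asset: ("problem" if "failed" in steps.values() else "ok")
            for asset, steps in status.items()}

report("ep01", "ingest", "done")
report("ep01", "transcode", "running")
report("promo02", "qc", "failed")
print(management_summary())   # {'ep01': 'ok', 'promo02': 'problem'}
```

The technical view is simply `status["promo02"]` itself, which shows exactly which step failed.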
“I think the other positive aspect of Orchestrator is that it has scaled along with their system and, even more importantly, it lets them reuse their resources to best advantage. For example, you can have a pool of transcoders. Then, depending on the nature of the content and the metadata associated with it, you can queue the content for transcoding, but you can also prioritise it. If you’ve got content that has to get through really fast, like news, alongside long-form content that typically takes anywhere from 10 to 30 minutes to transcode, you can reserve one or a couple of transcoders just for the urgent material, and Orchestrator will automatically route that content to the high-priority queue, which feeds the reserved transcoders so the short programs get through basically instantaneously.”
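The queuing-and-reservation behaviour described above is a classic priority-queue dispatch pattern. A hedged sketch in generic Python (again, not the Orchestrator API; pool names and capacities are invented):

```python
import heapq

# Illustrative sketch: news jumps the queue and lands on a reserved
# transcoder pool; long-form content uses the general pool.

NEWS, LONG_FORM = 0, 1   # lower number = higher priority
queue = []

def submit(asset, priority):
    """Queue an asset for transcoding with a priority."""
    heapq.heappush(queue, (priority, asset))

def dispatch(reserved_for_news=1, total_transcoders=3):
    """Assign queued jobs to pools, keeping some transcoders for news."""
    assignments = []
    general = total_transcoders - reserved_for_news
    while queue:
        priority, asset = heapq.heappop(queue)
        if priority == NEWS:
            assignments.append((asset, "reserved-pool"))
        elif general > 0:
            assignments.append((asset, "general-pool"))
            general -= 1
        else:
            submit(asset, priority)   # no general capacity left: requeue
            break
    return assignments

submit("drama_ep1", LONG_FORM)
submit("news_flash", NEWS)
print(dispatch())   # news_flash is dispatched first, to the reserved pool
```

However the real product implements it, the effect is the same: urgent content never waits behind a 30-minute long-form job.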
“Essentially, what we created for them was a fully integrated system to improve the processes and allow them to avoid the silos they’ve had in years past, with all these various subsystems effectively having manual handoffs or watch folders between them to move content along through the system.”
In terms of security requirements, is the focus of that protecting content, protecting their whole system, or a bit of both?
“It’s actually both. As an example, any time you receive new content or metadata from an outside party, the first thing that happens is that the content lands on server storage within a DMZ. Before that content can be moved into the broadcast zone – the production zone – Orchestrator would automatically have it virus scanned and, if it passed, would then allow that content to be ingested into the production zone. If not, it would be quarantined, an alert would be sent to the relevant team to manually handle that content, and potentially to notify the folks that sent it to them.
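The DMZ ingest gate described above is a simple decision flow. A minimal sketch, in which `scan_clean` is a hypothetical stand-in for whatever virus scanner is actually invoked:

```python
# Illustrative sketch of the ingest gate: content landing in the DMZ is
# virus-scanned before it may enter the production zone; failures are
# quarantined and raise an alert. scan_clean() is a hypothetical stub.

def scan_clean(filename):
    """Stand-in virus scan: flags anything marked 'infected' in its name."""
    return "infected" not in filename

def ingest_from_dmz(filename, alerts):
    """Gate a DMZ file into the production zone, or quarantine it."""
    if scan_clean(filename):
        return ("production-zone", filename)
    alerts.append(f"quarantined: {filename}; notify ops team and sender")
    return ("quarantine", filename)

alerts = []
print(ingest_from_dmz("ep01.mxf", alerts))           # goes to production-zone
print(ingest_from_dmz("infected_promo.mov", alerts)) # quarantined, alert raised
```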
“They actually have some fine-grained security requirements that you can’t handle with normal storage permissions. So, what they did was license Aspera Shares, which is effectively a web app that uses the Aspera transfer protocol to move content around, but which also allows you to control access to various places and content. Let’s say you’ve got a sports group, a news group and your regular other content for movies, etc., and, because of internal rules or regulatory issues, you’ve got to protect that content from users in the different groups. Setting up a Share is like setting up a project: you allow a certain group of users access to it, and you can do that through, say, Active Directory or LDAP (Lightweight Directory Access Protocol) permissions, and that way you manage the permissions and control access to that content internally. So you’ve got the external security – your normal network security measures preventing penetration into your network – you’ve got virus scanning, and then internally, by using Aspera Shares, they control access to the content depending on what group permissions you have.”
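The group-based access model maps to a straightforward membership check, with group membership resolved from Active Directory or LDAP in the real system. A sketch with invented share and group names, independent of how Aspera Shares actually stores its permissions:

```python
# Illustrative sketch: each share lists the directory groups allowed to
# see it; access is granted if the user belongs to any permitted group.
# Share and group names are invented for the example.

SHARES = {
    "sports-footage":  {"sports"},
    "news-rushes":     {"news"},
    "movies-library":  {"programming", "news"},
}

def can_access(user_groups, share):
    """True if any of the user's groups is permitted on the share."""
    return bool(SHARES.get(share, set()) & set(user_groups))

print(can_access({"news"}, "news-rushes"))     # True
print(can_access({"sports"}, "news-rushes"))   # False
```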
Mediacorp also operates quite a number of radio stations. Are they utilising Aspera for that, or is that a completely separate silo?
“No, absolutely, it’s tied in through Orchestrator, because they’re using products like RCS and Netia radio software, and those tie in just like Harmonic, Vantage, EVS, Avid, Grass Valley, Baton and Dalet – they all tie in via Orchestrator, and that’s where Orchestrator serves as a functional enterprise service bus for them and provides all the workflow orchestration for all the processes and subsystems. That way, they’ve gotten away from the silo approach that they had in the past.”
Does that allow different TV and radio channels to share content?
“Absolutely. Let’s say the television station didn’t already have access to a piece of content – you’d be able to kick off a workflow to transfer it from the radio system into the subsystem that handles all the video.
“They’re using Dalet for the MAM, so that also stores the metadata for everything, and one of the things Orchestrator does is tie it together with the radio system, the transcoder, et cetera. They all have different metadata requirements and their APIs all operate slightly differently, so Orchestrator facilitates that by acting as the glue, doing any transformations required to get metadata from one system’s format into another’s.”
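That metadata "glue" role amounts to field-mapping between subsystems. A hedged sketch in which every field name is hypothetical (real Dalet and transcoder schemas will differ):

```python
# Illustrative sketch: translate one system's metadata record into the
# field names another system's API expects, dropping unknown fields.
# All field names here are invented for the example.

DALET_TO_TRANSCODER = {
    "title":           "asset_name",
    "duration_frames": "length",
    "video_codec":     "source_codec",
}

def transform(metadata, mapping):
    """Rename fields per the mapping; drop fields the target doesn't know."""
    return {mapping[key]: value
            for key, value in metadata.items() if key in mapping}

mam_record = {"title": "News at 9", "duration_frames": 45000,
              "video_codec": "XDCAM HD422", "internal_id": "D-123"}
print(transform(mam_record, DALET_TO_TRANSCODER))
# {'asset_name': 'News at 9', 'length': 45000, 'source_codec': 'XDCAM HD422'}
```

One mapping table per subsystem pair keeps the translation logic in one place instead of scattering it across every integration.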