I’m not trolling at all. It just isn’t acceptable that you would let this happen. High-priority systems should be back operational by now, even if it means getting new hardware. Sure, full recovery would take months, but I regard an automation system that broadcasts news bulletins as high priority, and such systems should be restored by now.
Getting a station, a news show or anything else to air after a malfunction or hack takes time, as there are so many systems tied together in the background, regardless of how many backups they have.
If they have a semi-functional way to get the highest-viewership shows on air, they will, even if it means taking low-viewership shows off air in the short term to redeploy staff or gear.
I know from past experience at my own work that when a critical piece of gear is broken or damaged there may be a quick fix, but getting new parts or replacements shipped in can take many weeks, regardless of the cost.
Anyway! Let’s move on.
… but why would you? it’s been made very clear that there was no ransom demanded …
… and this is the fundamental problem … broadcast systems used to be proprietary and discrete (and very expensive) … now IT outfits like Cisco and others are running around telling broadcasters that they don’t need all of that expensive and reliable hardware anymore (not to mention the staff with decades of experience to run and maintain it) 'cos it can all be done cheaper using this or that software … we are currently witnessing the truth of that scenario at 9’s cost …
I kind of feel it’d be a bit more like Flying High.
“looks like I picked the wrong week to start my new job”
Just because a ransom hasn’t been demanded doesn’t necessarily rule out ransomware, especially given that the other hallmarks of ransomware are present.
From the SMH:
Independent security researcher Troy Hunt said the details resembled a ransomware attack — where criminals encrypt data to make it inaccessible and then demand money to unlock it. But Nine sources indicated no-one has come forward yet to claim responsibility for the attacks or to make any demands. “Once you start affecting availability, that’s the entire MO of ransomware; make things not available until you pay the money,” Mr Hunt said.
“Particularly over the last year we’ve also seen ransomware attacks where they’re no longer just encrypting the files but they’re taking a copy of the files [for extortion]. But no ransom has been forthcoming, so I don’t know if that makes it ransomware.”
And you know this how? Cisco aren’t some fly-by-night company; a lot of companies (and governments) trust Cisco to provide the infrastructure to operate their systems.
Spot on! Businesses like broadcasters and telcos have been trying to re-market themselves to investors as online shows, and that’s where the rot started: they have completely re-engineered their workforces to be online-focused, and most of the old engineers and tech officers have been shown the door. Mobile carriers have had some big outages in recent years that seem to get blamed on outsourcers - in days gone by their own staff would have done that work. Same goes for broadcasters - outages like the one this thread is about never happened before these businesses were redesigned. Some have even sold off their transmitter sites, and in one case even the transmitters (that will not go well when the 15-year contracts come up for renegotiation - the current bosses will have their bonuses and be long gone).
I can find no instances of anyone turning a legacy broadcasting business into a successful online business anywhere.
The CEO of Nine Entertainment was the CEO of Stan.
OK, let’s take a step back and look at why your understanding of IT maybe doesn’t equate to knowing how it works in a TV environment.
Until your IT department determines which backups are usable (and from when), and because broadcasting and production still need to continue in the meantime, you basically have to start from scratch until you reach a point where you can coordinate a sync/switch back.
Vizrt Mozart is a standard news studio automation system.
Installing their software sure is easy, can take no time at all… but then you have to configure it, make sure that it works with all the different systems that it is required to interact with - cameras, mixing desk, graphics, lighting, audio - all different brands and configurations. It’s not necessarily a difficult task, but it’s a time consuming one.
With automation you also have to build a LOT of instructional templates in your news rundown system for the different segments you produce in a news bulletin - they can be fiddly, and that’s assuming your rundown system is back up and running.
After those templates have been created, you then need to go through testing - running a bulletin from start to finish, finding all the bugs and fixing them. If they do it right, they’ll also test their fallback measures at the same time to make sure they’re in working order.
This process takes weeks, if not months.
And this is just one system - multiply that across the entire company… it’s not just “plug and play”.
Only time will tell if NEC’s investors get to see the benefits of online. They are the people that actually matter, not the execs.
You mean, like when they acquired Fairfax Media for their 50% share of Stan and grew NEC’s market cap by something in the order of $4 billion?
When the system was originally built, all configurations should have been stored by the vendor to prevent this, with any subsequent changes tracked in a configuration management system. Basic ITIL. I used to manage phone systems for major institutions and that is exactly what we did, plus we kept backup drives in our own storage so that if something happened we could recover very quickly - and we had changes happening all the time. That is basic common sense. Rundown templates that change all the time should be backed up daily by Nine, and even better by the vendor as well. So no, it should not be difficult to recover.
Plus, when reviewing backups you would notice changes in backup sizes: the hashes of the encrypted files would differ from the originals, so those files get picked up by incremental backups even though nobody had touched them for months. That makes it easier to pinpoint when the attack started.
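The idea above can be sketched in a few lines of Python - a minimal, hypothetical illustration (not any vendor’s actual backup tooling): hash every file in a snapshot, then diff consecutive snapshots. Files whose hashes suddenly change despite being dormant for months point to the first backup taken after the encryption began.

```python
import hashlib
from pathlib import Path

def snapshot_hashes(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*")
        if p.is_file()
    }

def changed_files(old, new):
    """Files present in both snapshots whose contents differ."""
    return sorted(f for f in old.keys() & new.keys() if old[f] != new[f])
```

Comparing each night’s snapshot against the previous one, a sudden spike in `changed_files` - especially across files that hadn’t changed in months - flags the date the encryption started.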
NEC shares were $2.47 on Jul18 and came back up to $3 at the start of March. The latest was $2.81 so it dropped but it would be a big call to say the crypto was to blame. Don’t see much upside for shareholders from the Fairfax acquisition anywhere though.
It was obviously a joke - don’t know how you and the other 8 people missed that.
Backups and backup procedures are all well and good, though, until you actually need them and discover that a core component is missing, or misconfigured… or the hardware changed since the last good backup.
There are so many options and problems that could be happening.
If I remember correctly, it took 10 about 4-6 weeks to recover… it was just less obvious because they only have about 1 hour of automation a day, even on weekends. Their automation also isn’t as intertwined with everything as 9’s is.
That’s why you test the procedures at least on a yearly basis to catch these issues.
Nine in Sydney moved less than a year ago.
… because I’ve been associated with the broadcast industry for over fifty years and still subscribe to countless magazines and newsletters that have been saying exactly that … no, they’re not “fly-by-night” as an IT company, but they’re not a broadcast engineering company … and while they’re obviously very good at providing software to “operate systems” that’s not the same as being “the system” which is the problem at 9 … https://itwire.com/networking/cisco’s-digital-infrastructure-built-on-ip-fabric-to-provide-increased-efficiency,-agility-and-flexibility.html