With social media seemingly fuelling organizing inside opposition political parties, the ruling party ZANU PF has been caught up in the storm of trying to use the same space to annihilate its nemesis.
According to a Twitter report from Kuda Musasiwa, a cache of video footage purportedly obtained from various lodges in Harare is in the hands of the government.
As it stands, these are mere rumors. It is not clear what the handler of the videos intends to use them for, or whether he is politically aligned and hoping to score political points with them.
The developments are worrying because no one knows what these videos will be used for. Kembo Mohadi resigned from the presidium after a leak of conversations with different women detailing intimate relationships. As such, it is very tempting to conclude that there are ulterior motives behind the harvesting of the videos.
This sad development, however, comes hot on the heels of an Act of Parliament, the Cybersecurity and Data Protection Act, which was signed into law on 6 December 2021.
Some of the offences set out by this law concerning electronic communications and materials include “transmission of data, messages inciting violence or damage to property, sending threatening data/message, cyberbullying and harassment, the transmission of false data/message intending to cause harm, spam, the transmission of intimate images without consent, production and dissemination of racist and xenophobic material, and identity-related offences.” Such provisions make the release of such videos look very unlikely, unless someone untouchable does so.
Who will be allowed to go against the law and get off scot-free?
If the videos are released, what precedent will that set?
While anything is possible in Zimbabwe, it is sad that politics is taking forever to mature and become responsive to the people.
However, while it is easy to dismiss the possibility that the so-called 300 gigabytes of damaging video footage exists, artificial intelligence vultures could use it for ulterior motives ahead of the 2023 elections.
There has been a disturbing trend in Europe where cyber technology is being used to create and broadcast dangerous deepfake videos.
A case in point is a video in which the president of war-torn Ukraine, Zelensky, appeared to urge his countrymen to surrender to Russia.
Closer to home, we have seen videos with voice-overs in which president Emmerson Mnangagwa appears to say he has resigned, and another in which he appears to ask people to vote for his rival Nelson Chamisa as president.
Such videos have the potential to grow in number. They might be casually used to sway political outcomes in 2023. Social media is growing and its traffic is high; the battlefield might well be the social media platform.
Researchers have ranked deepfakes as the most dangerous form of crime enabled by artificial intelligence.
The term “deepfake” refers to a video where artificial intelligence and deep learning – an algorithmic learning method used to train computers – has been used to make a person appear to say something they have not.
Deepfake content is a danger for a number of reasons, a prominent one being that it is difficult to detect.
This is because while deepfake detectors require training through hundreds of videos and must be victorious in every instance, malicious individuals only have to be successful once.
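The one-sided odds described above can be illustrated with a rough back-of-the-envelope calculation. This is a simplified model that assumes each fake video is judged independently with the same detector accuracy; the function name and figures are illustrative, not taken from the researchers' study:

```python
def attacker_success_prob(detector_accuracy: float, attempts: int) -> float:
    """Chance that at least one fake slips past a per-video detector.

    Simplified model: every attempt is an independent trial, and the
    detector catches a fake with probability `detector_accuracy`.
    """
    return 1 - detector_accuracy ** attempts

# Even a detector that catches 99% of fakes per video is more likely
# than not to be fooled at least once across 100 attempts:
print(f"{attacker_success_prob(0.99, 100):.2f}")  # ~0.63
```

In other words, a detector must win every round, while the forger needs only one video to get through, which is why detection alone is a weak defence.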
A second reason is the variety of crimes deepfakes could be used for, such as discrediting a public figure by impersonating them.
The long-term effect of this could be a distrust of audio and video evidence in general, which the researchers say would be an inherent societal harm.
Top researcher Dr Matthew Caldwell said the internet is now a way of life and artificial intelligence vultures prey on that ecological space.
“People now conduct large parts of their lives online and their online activity can make and break reputations.”
“Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”
“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
While not deepfakes themselves, such videos have shown that there is a market for video content that could diminish the standing of political opponents.
As well as deepfake content, the researchers identified five other crimes utilising artificial intelligence that could be of high concern in the future.
Whether or not the 300 gigabytes of footage is released, a storm is brewing.
Source : ByoNews24