The world is increasingly not as it seems. With the growth of digital technologies, visual trickery has never been more sophisticated, and with each passing year the saying "seeing is believing" becomes less true. To highlight this issue, United Kingdom TV network Channel 4 recently released a deepfake video of the Queen performing a TikTok dance as part of her Christmas address, provoking a mix of confusion, offense and delight among British viewers. Broadcast as the genuine monarch gave her traditional Christmas Day speech, the Channel 4 digital fake was intended to warn viewers that what we see and hear is not always as it seems.

Deepfakes are causing headaches for regulators around the world. Image authentication software for photoshopped images is gradually being adopted by law enforcement agencies worldwide, but deepfaked videos are a whole new level of deception. Such videos could make you appear to say something you did not, or place you somewhere you were not. Because these fakes are generated by AI that continues to learn and improve, they will inevitably beat conventional detection technologies.

Steps to combat this are already in place. Software such as the recently announced Microsoft Video Authenticator can analyze footage and give a percentage confidence score of how likely it is that the footage has been faked. Companies such as Snapchat and Tencent have also invested millions in AI detection technology. By analyzing the blending boundary between pixels and subtle irregularities that may not be detectable to the human eye, this kind of software is a step in the right direction toward ensuring that we can trust our online news sources. Unfortunately, though, such efforts are unlikely to be a permanent solution.
The rate at which deepfake sophistication grows each year means big tech and regulators alike will inevitably be drawn into a resource-intensive detection arms race, always trying to stay one pixel ahead of evolving misinformation. Stronger policies must be enacted to deter such activity in the first place.

One of the first governments to act has been China's, where a new policy bans the publication of deepfakes without a disclaimer notifying the viewer that the content is fake. Failure to comply constitutes a criminal offense. Harsher crackdowns on this disruptive technology may be necessary, as the perils of fake news increase exponentially as our reliance on social media grows. Countries in Europe and North America could take note of China's lead. The Cyberspace Administration of China said in a statement: "With the adoption of new technologies such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people's interests creating political risks and bringing a negative impact to national security and social stability."

Legal frameworks elsewhere in the world need to catch up. For example, if you are in the UK and discover that someone has created a deepfake video of you doing something you did not do, there are no existing laws banning such deception to help you. Your best bet would be to hope that English case law protecting against commercial misappropriation of your public image might apply; even then, the case would never be treated in a criminal context, despite being hugely damaging. The alarming speed at which deepfakes spread is testament to the cross-border nature of the internet, and exploits the lack of an international framework of laws governing image rights and media. What we need is such a framework, under which countries can collaborate to fight the global spread of misinformation.
Ensuring domestic legal frameworks are up to the job first is a good place to start.

Barry He is a London-based columnist for China Daily.