Ethical and Legal Debates on AI-Driven Misrepresentation and Intellectual Theft

You may say, “I will believe it only if I see it with my own eyes.” That may have sounded reasonable a decade ago, but in the era of artificial intelligence the real question is, “Can you believe what you see at all?” Unfortunately, this is no longer a joke, and the term ‘deepfake’ has entered everyday language.

The uncomfortable truth is that technologies capable of creating deceptive images by altering real ones are advancing far faster than the ability of humans or machines to identify fakes. New stories appear every day about these technologies being used for economic and political manipulation, and a deepfake can borrow the face of a persona everyone knows and trusts. In this way, opinions are manipulated and decision-making is influenced by those who stand to benefit from forgery and public deception.

Deepfakes: What Are They?

Britannica defines a deepfake as “synthetic media, including images, videos, and audio, generated by artificial intelligence (AI) technology that portray something that does not exist in reality or events that have never occurred.” [1]

The Merriam-Webster dictionary offers a definition that also focuses on the purpose of deepfakes: “An image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” [2]

A deepfake, then, is always associated with impersonation and deception. Forbes describes deepfakes as part of the dark side of AI: they are changing the face of financial fraud, putting reputations under attack, and eroding trust. By mimicking facial features and voices, artificial intelligence generates fake audio and video recordings. [3]

What are the possible consequences of such fraud, and why does it raise so many legal and ethical debates?

  • Psychological harm
  • Business and financial disruption
  • Political instability
  • Impact on national security
  • Attack on personal freedom

Deepfake vs. Photoshop / Photo Apps

Social media is now full of fake images, and although they may irritate us, most do no harm. Some photo apps also offer the fun option of swapping faces for personal entertainment. You may want to see what you would look like as a movie character or as an elderly person, and an app will generate a surprising picture of you in an unfamiliar guise. Is it possible to tell whether such images are original or altered? Yes, definitely. In these cases, photo-altering technologies are harmless.

Deepfakes are a completely different matter: users never know they are being deceived. The danger lies in false images that look genuine and true to life. For instance, a seemingly verified Twitter account shared a fake image of an explosion at the Pentagon that looked absolutely real, yet the event had never taken place. The word Bloomberg in the account name suggested an official post from Bloomberg News, but it was part of a deceptive scheme that caused a brief dip in the financial markets, profitable for some of the parties involved.

How to Tell a Deepfake from Photoshopped Content?

  • Deepfake images are generated by advanced machine-learning algorithms (a conceptual sketch of the idea follows this list).
  • Deepfake content is created mostly for misinformation or manipulation that may damage someone or put them at risk.
  • A further reason for concern is that Photoshop tools can be used to make the fakery even more convincing and lifelike.
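
The first bullet deserves a closer look. Classic face-swap deepfakes are typically built around a shared encoder that learns the common structure of faces and a separate decoder trained for each person, so a face encoded from person A can be decoded to look like person B. The Python (PyTorch) sketch below is only a conceptual illustration of that idea, not any particular tool’s implementation: it defines toy-sized, untrained networks and runs a single forward pass.

```python
# Conceptual sketch only: a shared encoder with per-person decoders, the structure
# behind classic face-swap deepfakes. Toy-sized and untrained, for illustration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 3x64x64 face image into a small latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuild a face from the latent vector (one decoder is trained per identity)."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

face_of_a = torch.rand(1, 3, 64, 64)          # stand-in for a real photo of person A
swapped = decoder_b(encoder(face_of_a))       # after training, this would resemble person B
print(swapped.shape)                          # torch.Size([1, 3, 64, 64])
```

This is what separates a deepfake from a manual edit: the forgery is produced by a model that has learned how a face looks, moves, and catches the light, not by someone pushing pixels around by hand.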

Spotting a Deepfake: Is It Possible for You?

The recognition algorithms behind cybersecurity tools are improving daily. However, even without sophisticated tools, and even if you are not an expert, you can often judge whether a video you are watching is real.

  1. Check the source of the video. Who posted it? Is the source legitimate and trustworthy?
  2. Trace the origin of the video using search engines. Take a screenshot of a frame and upload it to Google Images (reverse image search). Check the history of the image: other existing versions, other sources where it appears, and so on. A small code sketch of this step follows the list.
  3. Trust your gut feeling: could the video plausibly be true?
  4. Don’t share anything unless you are 100% sure it is not a deepfake.
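
If you are comfortable with a little code, the screenshot-and-search step can be partly automated. The sketch below is only an illustration and assumes Python with the opencv-python, Pillow, and imagehash packages installed; the file names are hypothetical. It grabs one frame from a suspicious clip and compares its perceptual hash with an image you found elsewhere: a small distance suggests the same footage, a large one suggests different (possibly manipulated) material.

```python
# Minimal sketch of step 2, assuming opencv-python, Pillow, and imagehash
# are installed; the file names below are hypothetical.
import cv2
import imagehash
from PIL import Image

def frame_fingerprint(video_path: str, second: float = 0.0) -> imagehash.ImageHash:
    """Grab one frame from the video and return its perceptual hash."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_MSEC, second * 1000)  # jump to the chosen moment
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise ValueError(f"Could not read a frame from {video_path}")
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV is BGR; Pillow expects RGB
    return imagehash.phash(Image.fromarray(rgb))

# Compare the suspicious clip against an earlier version found online.
suspect = frame_fingerprint("suspicious_clip.mp4", second=2.0)
reference = imagehash.phash(Image.open("earlier_version.jpg"))
print(f"Hash distance: {suspect - reference}")   # small distance = likely the same footage
```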

Measures to Be Taken to Address the Deepfake Risks

It may look like fun to swap faces, add a fake video or audio track for entertainment, or stage a fake scenario to surprise friends, but today the prospects are genuinely threatening. The speed at which deepfakes are spreading is alarming; this is no longer a harmless game but the beginning of serious disruption in the digital world. We must be extremely cautious about digital deceit and take measures at every possible level to protect ourselves against constantly emerging cyber threats.

  1. All states need legislation that provides clear frameworks for addressing the general risks of fraud committed by digital means.
  2. These laws should specifically address the challenges posed by deepfakes.
  3. Governments are expected to provide extensive support for the development of sophisticated detection software and tools, not only for the authorities but also for public use. Just as plagiarismsearch.com is used to identify copy-pasted, stolen, and AI-generated text, there should be accessible tools for detecting deepfakes (a rough sketch of such a detector follows this list).
  4. Educational institutions should take part in awareness campaigns that keep citizens informed about cyber threats and the particular nature of deepfakes.
  5. Social media platforms and tech companies should be encouraged to make extraordinary efforts to detect deepfakes and prevent their spread.
  6. Global norms and standards should be established to combat new digital challenges, including deepfakes; this requires a dialogue between international bodies and individual governments.
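
To give a sense of how a public-facing detector might be wired up, here is a minimal, non-authoritative sketch in Python (PyTorch/torchvision). It assumes a binary image classifier that has already been fine-tuned to label face images as real or fake; the checkpoint path and the label order are hypothetical, and a production detector would need far more (face cropping, video-level aggregation, calibrated thresholds).

```python
# Minimal sketch of a frame-level deepfake check, assuming PyTorch and torchvision.
# The checkpoint path and label order are hypothetical placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["real", "fake"]  # assumed order used during fine-tuning

# Standard EfficientNet backbone with a two-class head
model = models.efficientnet_b0(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, len(LABELS))
model.load_state_dict(torch.load("deepfake_detector.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_frame(path: str) -> str:
    """Return the predicted label and confidence for a single frame."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    idx = int(probs.argmax())
    return f"{LABELS[idx]} ({probs[idx]:.1%} confidence)"

print(classify_frame("suspicious_frame.jpg"))
```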

Only an effective combination of technological innovation, legal intervention, international cooperation, and public awareness campaigns can ensure the safety of the digital space and the integrity of information in every sphere.

Seeing can no longer be believing!

Still, every virus eventually meets its vaccine. The more insight you gain into the threats of digital viruses and deepfakes, the more resilient you become to attacks.

melissaanderson.ps@gmail.com
Melissa Anderson
Born in Greenville, North Carolina. Studied Commerce at Pitt Community College. Volunteer in various international projects aimed at environmental protection.
Former Customer Service Manager at OpenTeam | Former Company secretary at Chicago Digital Post | PlagiarismSearch Communications Manager