Artificial intelligence (AI) seems to be in the news all the time.

Reasoning, learning, planning, and creativity are now possible with the click of a mouse, thanks to the power of AI, Red Banyan Founder and CEO Evan Nierman notes in a Fast Company article titled “AI’s looming potential for a crisis.”

In the article, Nierman points out that AI is already doing more than just impress: it is making mistakes that could portend disaster for organizations without a crisis management plan in place.

Read the entire article here:


AI’s looming potential for a crisis

Reasoning, learning, planning, and creativity are now possible with the click of a mouse, thanks to the power of artificial intelligence (AI). The speed and precision of this cutting-edge technology are impressive, but AI is already doing more than just impress. Some search results could portend disaster without a crisis management plan in place.

“Machine learning” is perpetuating ethnic stereotypes, promoting workplace bias, and sharing racist or fake imagery with searchers. These unforeseen results could negatively impact an individual’s personal image and feelings of self-worth as this sophisticated technology continues to evolve.


AI systems process data using software that mimics human intelligence. AI “learns” by ingesting information from the internet, and it processes this information by adjusting its functions and algorithms as needed. It can recognize behavior patterns and errors but does not have the critical thinking skills that are unique to humans.
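The “learning by adjusting” idea can be shown with a toy example. This is a hypothetical illustration of a single parameter being nudged to reduce prediction error, not how any real AI product is built:

```python
# Toy "learning": fit y = w * x to data by repeatedly adjusting w
# to shrink the squared error (the true relationship here is y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # the model starts out knowing nothing
lr = 0.05  # how strongly each mistake adjusts the model

for _ in range(200):            # many passes over the data
    for x, y in data:
        error = w * x - y       # how wrong the current guess is
        w -= lr * error * x     # adjust the parameter to reduce the error

print(round(w, 2))  # prints 2.0 -- the model has "learned" the pattern
```

The same adjust-on-error loop, scaled up to billions of parameters and internet-scale data, is the core of how modern AI models recognize patterns without human-style critical thinking.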

With machine-like precision, AI generators spit out copy, create artwork, and produce images based on textual prompts. The results come with lightning speed, but users are noticing a dark side: AI is creating images that promote eating disorders, mimic racial stereotypes, and cast people of certain skin colors into jobs steeped in bias. Other content may be fake, even though the video or image looks like the real thing.


AI-generated media content that portrays something that never really happened, or that is misleading, is referred to as a deepfake. Deepfakes warp our perception of reality with fake sights and sounds that make it difficult to separate fact from fiction. And because the technology is continually improving, deepfakes are getting better and harder to recognize.


“Deepfakes” use technology to replace the likeness of someone in video or audio with the likeness of someone else. The process is done with AI and commonly edits one person’s face onto another person’s body in a video, making it appear as if someone said or did something they never did.

There is no one simple way to determine whether something has been created by AI, and this increasingly sophisticated technology can be used to create misleading or false videos, target celebrities, shame women, manipulate elections, and create false histories. Awareness of deepfakes and how they may be used is key to preventing them from being widely circulated.


Deepfakes can be used positively to create training videos, assist law enforcement in recreating crime scenes, and help people protect themselves from identity theft online by letting them set up accounts with AI-generated avatars instead of real photos.

However, deepfakes have been widely used to create online porn, including swapping the faces of celebrities with those of porn stars to create embarrassing videos that look like the real thing. Deepfakes can spread misinformation by portraying someone doing or saying something they never did, convincing viewers through fake videos, audio, and messages that closely resemble the real thing. Deepfakes can also be used to create false evidence in criminal cases.


Deepfakes can be used to intimidate and harass, incite civil unrest, and influence elections. AI technology, used nefariously, can undermine public trust in audio-visual content and sow doubt about almost anything that is viewed online. If shared on social media, massive numbers of people can be affected by the misleading or false content, causing great harm to society.

Recent AI searches on weight loss produced disturbing images of severely underweight women while chatbots offered up eating plans that appeared to be pro-anorexia.


The introduction of AI is sure to bring with it many challenges, so it is important to have a crisis management plan should a problem arise for you, your employees, or your brand.

Because AI and deepfakes are evolving quickly, it is important to pay attention, stay informed, and keep abreast of the latest technology. Understanding how deepfakes work and how to detect them is key to surviving this sophisticated new technology and coming out safely on the other side.

Here are some tips to safeguard your business:

  1. Educate: Educate employees about the threat of deepfakes, and urge them to carefully review all content before sharing, posting, or acting on it. Conduct training to help employees recognize deepfakes and identify potential threats. Deepfake audio often contains noticeable background noise, and videos may show unnatural facial movements, problems with voice synchronization, or uncharacteristic voice patterns.
  2. Add An Extra Layer Of Protection: Implement multi-layer authentication across your business as an extra layer of protection.
  3. Monitor Regularly: Regularly monitor your company’s social media footprint for possible threats.
  4. Beef Up Security: Improve your company’s cybersecurity.
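On tip 2, one widely standardized building block for multi-layer authentication is the one-time password used by common authenticator apps. As a minimal sketch, here is the HOTP algorithm from RFC 4226, implemented with only the Python standard library (the secret shown is the RFC’s published test value, not one to use in practice):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an HMAC-based one-time password (RFC 4226)."""
    # The moving factor (counter) is packed as an 8-byte big-endian value.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 Appendix D test vectors
secret = b"12345678901234567890"
print(hotp(secret, 0))  # prints 755224
print(hotp(secret, 1))  # prints 287082
```

A second factor like this means a convincing deepfake voice or video alone is not enough to authorize a transaction: the attacker would also need the victim’s one-time code.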

If you are victimized by a deepfake or a false image, be ready to take control of the narrative and tell your side of the story so you can set the record straight. The full extent of AI’s power and influence remains to be seen, but having a plan of action in place is the best insurance possible.

Evan Nierman is CEO of Red Banyan, a global crisis PR firm, and author of Amazon bestsellers The Cancel Culture Curse and Crisis Averted.