Cyber Expert: Generative AI Disinformation Overblown

Cybersecurity experts warn that artificial intelligence-generated content could distort our perception of reality
May 10, 2024

Concerns are growing among cybersecurity experts that artificial intelligence-generated content could distort public perception of reality, especially in the context of critical elections.

However, not all experts agree on the extent of the threat. Martin Lee, the technical lead for Cisco’s Talos security intelligence and research group, believes that while deepfakes are a powerful technology, they may not be as impactful as fake news.

Lee argues that while new generative AI tools make creating fake content easier, the content they produce often contains identifiable indicators that it was not made by real people. AI-generated visual content, for instance, can have noticeable flaws, such as unrealistic features or anomalies.

Although distinguishing synthetic voice audio from real recordings can be more challenging, Lee emphasizes that machine-generated content can often be detected when examined objectively. Even so, experts anticipate that AI-generated disinformation will pose a significant risk in upcoming elections worldwide.

Matt Calkins, CEO of enterprise tech firm Appian, acknowledges the potential of AI but believes its usefulness is currently limited. He suggests that AI tools often produce unremarkable output and that significant progress is needed before they become truly effective.

Calkins warns that once AI can understand and replicate human behavior more accurately, it could become a far more effective tool for spreading disinformation. He also expresses dissatisfaction with the pace of regulation in the United States, suggesting it might take a particularly egregious incident for lawmakers to act.

Despite the challenges posed by AI-generated content, Lee reassures that there are reliable ways to spot misinformation, whether it is produced by humans or machines. He advises people to stay vigilant, verify information against reputable sources, and question the plausibility of emotionally triggering content.

In conclusion, while AI-generated content presents new challenges, existing methods for spotting misinformation remain effective. Vigilance and critical thinking are key to combating the spread of false information in the digital age.
