
☢️ Warfare, The Dark Web, and The Hands of Cheaters

PLUS: Warner Music signs its first AI artist.

Welcome To DarkAI

Welcome to the inaugural issue of DarkAI. In it, we explore:

  • ☢️ Artificial intelligence’s role in warfare

  • 🇺🇸 How AI could interfere with the 2024 election

  • 🎧 Artificial musicians: is this right?

  • ⚡️ AI plagiarism and the failed attempts to detect it

If you’re new, subscribe to receive all future editions of DarkAI! There is plenty in the pipeline.

POLITICS

As the world gears up for a series of pivotal votes in 2024, the looming concern is the potential weaponization of AI in disinformation campaigns. Historically, disinformation has been the domain of humans, but advances in generative AI are changing the landscape, making synthetic propaganda a disturbing possibility. This shift raises questions about the credibility of information and its impact on democratic processes.

Generative AI may significantly alter the disinformation landscape in three key ways.

  1. First, it could dramatically increase the volume of disinformation, potentially swaying voters on a massive scale.

  2. Second, AI-powered deepfakes could deceive voters before the false content can be identified and debunked.

  3. Lastly, AI allows for highly personalized microtargeting, inundating voters with tailored propaganda.

While concerns about AI-driven disinformation are valid, it is unlikely to single-handedly dismantle democracy. The power of misinformation is not determined by the tools alone; it depends on multiple factors, including human discernment and the broader information environment.

Social media platforms and AI companies are increasingly aware of these risks and are taking steps to combat them. Monitoring AI usage, identifying suspicious accounts, and implementing content verification mechanisms are some of the strategies being employed.

While voluntary regulation has its limits, heavy-handed control could stifle AI innovation. In the evolving landscape of disinformation, understanding AI's potential and its limitations is crucial for safeguarding democratic processes.

DARK WEB

In the shadowy corners of the internet, a concerning trend is emerging. Threat actors are increasingly drawn to the capabilities of generative artificial intelligence (AI) tools, and the consequences are far from benign.

Recently, it was revealed that hundreds of thousands of OpenAI credentials have surfaced on the dark web, offering a disturbing peek into the underworld of cybercrime.

The data, disclosed by Flare, a threat exposure management company, is nothing short of alarming. Over the past six months, mentions of ChatGPT on the dark web and platforms like Telegram have surged past 27,000. Cybersecurity researchers at Flare also uncovered more than 200,000 OpenAI credentials for sale in the form of "stealer logs." While that figure is dwarfed by ChatGPT's estimated 100 million users, it signals a growing interest among threat actors in using generative AI tools for nefarious activities.

Notably, one cybercriminal has gone so far as to create a sinister ChatGPT clone known as WormGPT, which has been trained on malware-focused data and is brazenly advertised as the "best GPT alternative for blackhat" and a tool that facilitates illegal activities. WormGPT is powered by the GPT-J open-source large language model, and its developer remains tight-lipped about the specific datasets used.

WARFARE

In the realm of cutting-edge military technology, the U.S. Air Force is forging ahead with a groundbreaking project, the XQ-58A Valkyrie. This experimental aircraft, steered by artificial intelligence, holds the promise of offering American forces a strategic advantage in conflicts. While it showcases the potential for AI to revolutionize the battlefield, it also raises ethical questions about the responsible deployment of such potent technology.

The XQ-58A Valkyrie, developed by Kratos Defense & Security Solutions, is gaining traction as a cost-effective alternative for military operations. With each unit estimated to cost around $4 million, this AI-powered aircraft stands in stark contrast to the MQ-9 Reaper drone, which costs approximately $30 million per unit, and the F-35 fighters, priced at roughly $80 million each.

Despite its remarkable potential, the deployment of AI in warfare raises crucial ethical concerns. The level of autonomy granted to lethal AI-run weapons is a point of contention, particularly in light of past concerns about civilian casualties in U.S. drone programs. While proponents believe the Department of Defense has effectively managed these concerns so far, further development may see AI-run aircraft transition from defensive to offensive roles, necessitating a careful balance of objectives, safeguards, and human oversight in this evolving frontier of military technology.

As AI continues to reshape the battlefield, the balance between innovation, security, and ethical considerations becomes ever more critical for the future of warfare.

EDUCATION

OpenAI has provided a candid insight into the limitations of AI when it comes to distinguishing between AI-generated and human-generated content. In their educator-specific FAQ section, OpenAI acknowledges that while some tools claim to detect AI-generated content, none have proven to be entirely reliable. This disclosure is particularly crucial for educators as students increasingly rely on these tools for assignments and homework.

OpenAI emphasizes that even its own AI chatbot, ChatGPT, lacks the capability to recognize AI-generated content. Regardless of the prompts used, ChatGPT's responses regarding content authenticity are random and devoid of factual basis.

OpenAI's attempts to train its AI tools in content detection have encountered challenges, with occasional mislabeling of renowned texts like Shakespeare's works and the Declaration of Independence as AI-generated.

OpenAI also points out that even if such tools were accurate, students could easily make minor edits to evade detection. This underscores the ongoing complexity of AI content detection, an issue with implications for both education and broader AI ethics.

MUSIC

Warner Music Central Europe has raised eyebrows by offering a record deal to a digital character named Noonoouri, who recently released her debut single "Dominoes" featuring DJ Alle Farben. Noonoouri's singing voice, generated with the assistance of artificial intelligence, was modeled on a real singer's voice and then modified to give her a unique sound. While the songwriters and musicians involved in the track will receive royalties and publishing splits, the move raises questions about the integrity of the music industry.

Noonoouri's debut track and music video, in which she wears Kim Kardashian's Skims brand, have sparked debate about the future of music creation and the role of AI in the industry. While Warner Music Central Europe emphasizes that Noonoouri is not entirely AI-generated, the use of AI to shape her singing voice has stirred controversy, calling into question the authenticity and creativity associated with music production.

FEEDBACK

What did you think of this issue?

If you’re reading in an email, hit reply!

If you are reading online, reach out to me on Twitter.

Want to let a co-worker know about this newsletter? Feel free to forward it along.
