“Model Collapse” is a real danger to AI – NY Times: Hashtag Trending for Thursday, August 29, 2024


Amazon unveils new AI-powered Alexa features with a subscription model, California moves ahead with AI regulation, Russia says our internet and GPS connections are “fair game,” and while the New York Times hasn’t said AI is dead, it does point out a mortal threat.

All this and more on the “Sounds of AI Silence” edition of Hashtag Trending. I’m your host, Jim Love. Let’s get into it.


The Washington Post reports that Amazon is announcing an update to its Alexa voice assistant, incorporating advanced AI capabilities designed to make the device more interactive and human-like.

The new Alexa will feature enhanced natural language processing, enabling it to understand and respond to more complex commands and questions. This upgrade will be available as part of a subscription service called “Alexa Plus,” marking a shift from Amazon’s traditional model of free updates for its smart devices.

The new AI-powered Alexa aims to improve the user experience by offering more personalized interactions and a deeper contextual understanding of user requests. For instance, Alexa will now be able to carry on more natural conversations, remember details from previous interactions, and provide more nuanced responses. These updates are built on Amazon’s new AI models, which have been trained to better understand human language and intent.

The introduction of a subscription model is a notable move, potentially signaling a shift in how tech companies monetize smart home technology. Amazon’s decision could pave the way for other companies to explore similar revenue models. The update is expected to roll out later this year, giving users a preview of the future of smart home interaction.

If Amazon makes this work, it will mark a significant shift in the smart home industry towards subscription-based services and demonstrate the growing influence of AI in everyday technology. The move could have broad implications for privacy, data usage, and consumer expectations of smart home devices.

Sources include: Washington Post

Recent developments suggest Russia may be targeting critical Western communication infrastructure, potentially threatening global internet and GPS systems.

Dmitry Medvedev, deputy chairman of Russia’s Security Council, recently warned that undersea cables enabling global communications could be legitimate targets for Russia. While Medvedev is known for provocative statements, experts believe this threat should be taken seriously.

These undersea fiber-optic cables carry about 95% of international data traffic between continents, supporting internet services, financial transactions, and more. NATO’s intelligence chief, David Cattler, has warned that Russia may be planning to target these cables in retaliation for Western support of Ukraine.

Simultaneously, Russia has been accused of interfering with GPS navigation systems, causing disruptions to commercial airline routes. This is seen as part of Russia’s ‘gray zone’ campaign against the West – covert actions below the threshold of open warfare.

The vulnerability of these systems is not new. During the Cold War, both the US and USSR surveilled undersea cables. However, our increased dependence on electronic communications has made these cables a critical point of vulnerability.

Experts argue that current protective measures are insufficient. While NATO has begun taking action to safeguard undersea cables, more robust government fallback plans are needed. The Center for Strategic and International Studies has called for increased international cooperation to coordinate responses to potential attacks on cables.

This situation underscores the urgent need for countries to develop resilience plans and alternatives to keep critical communications operational if key infrastructure is compromised. It also highlights the complexities of holding perpetrators accountable for sabotage in international waters.

As our reliance on connectivity and space data grows across various sectors, from agriculture to food delivery, the potential for disruption through interference with subsea cables and GPS becomes an increasingly serious threat to national and economic security.

Sources include: Business Insider

We’ve reported that California’s Senate Bill 1047, aimed at regulating artificial intelligence, has sparked debate in the tech industry. The bill would require developers of the most powerful AI models to meet safety requirements before training and releasing them.

The bill continues to move forward: it has passed the state Senate and now faces an August 31 deadline for Assembly approval.

Key players are divided: Anthropic cautiously supports the amended bill, while OpenAI opposes it, arguing it could hinder innovation. Elon Musk has voiced support, stating that AI’s public risk justifies regulation.

The bill has undergone significant changes, including the removal of criminal penalties and adjustments to legal standards. It now applies to models that cost at least $100 million to train, or at least $10 million to fine-tune.

Critics, including some members of Congress and industry groups, argue that federal regulation would be more appropriate. Bill sponsor Senator Scott Wiener counters this, stating, ‘I reject the false claim that in order to innovate, we must leave safety solely in the hands of technology companies and venture capitalists.’

As Congress has yet to act on AI regulation, California’s move highlights the growing role of states in shaping tech policy. The outcome of SB 1047 could set a precedent for AI governance in the U.S.

Sources include: Axios

And finally, we’ve done a lot of stories about the conflict between content producers and AI companies using their content to fuel AI systems. These systems have an insatiable need for content at levels that are almost unimaginable.

Not too long ago, Sam Altman of OpenAI was musing about solving this issue with “synthetic data” – data created by AI to train AI. You might have noticed that he’s not talking about that now; in fact, OpenAI is signing agreements with content producers to use their content.

Why? Well, a story in the New York Times may hold the key. The article describes a concerning trend emerging in the world of artificial intelligence: the potential for AI to consume and learn from its own output, leading to a degradation in the quality and diversity of information. This phenomenon, known as “model collapse,” could have far-reaching implications for the future of AI and the information we consume.

While AI model developers hunt for more and more new content, AI-generated content is flooding the internet. OpenAI alone produces about 100 billion words per day, roughly the equivalent of a million 100,000-word novels.

As AI companies train new models, they’re increasingly likely to inadvertently ingest some of this AI-generated content, creating a feedback loop.

Research shows that when AI is trained on its own output repeatedly, the quality and diversity of its results can significantly decrease. This has been demonstrated with text, images, and even simple tasks like generating handwritten digits.

The problem, known as ‘model collapse,’ occurs because AI-generated data is often a poor substitute for real, human-generated data. It lacks the nuance and diversity found in genuine information.
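To make the mechanism concrete, here’s a toy simulation. This is a sketch of my own, not code from the Times article or the underlying research. It treats a “model” as nothing more than the word frequencies of its training text, then trains each generation entirely on text sampled from the previous generation’s model:

```python
# Toy sketch of "model collapse" (hypothetical illustration, not the
# researchers' code). Each generation "trains" on a corpus produced
# entirely by the previous generation's model.
import random
from collections import Counter

random.seed(7)
VOCAB = [f"word{i}" for i in range(100)]
CORPUS_SIZE = 300

# Generation 0: "human" text drawing on the full vocabulary.
corpus = [random.choice(VOCAB) for _ in range(CORPUS_SIZE)]

for generation in range(1, 21):
    # "Train": estimate word frequencies from the current corpus.
    counts = Counter(corpus)
    tokens = list(counts.keys())
    weights = list(counts.values())
    # Generate the next corpus purely from the model's own output.
    corpus = random.choices(tokens, weights=weights, k=CORPUS_SIZE)
    print(f"generation {generation:2d}: distinct words = {len(set(corpus))}")

# Once a word fails to appear in one generation it is gone for good,
# so vocabulary diversity only shrinks from one generation to the next.
```

Real language models are vastly more sophisticated, but the dynamic is the same: anything a model under-represents in one generation becomes rarer still in the next, and what disappears never comes back.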

The New York Times article links to some graphic examples. It’s well worth a look, and I don’t think you need a subscription to view a single article, although they do make you log in. I’ll post the link in our show notes at technewsday.com.

This issue poses real challenges for AI companies. As high-quality, diverse data becomes scarcer, it may slow AI development and make it harder for new companies to compete. Worse, if model collapse sets in, these large models could conceivably grind to a halt, or lose the confidence of the public as data accuracy becomes questionable.

Solutions being explored include paying for high-quality data, developing better AI detection methods, and using human curation of AI-generated content.

Is it the death knell of AI? That’s certainly possible, but unlikely. It is, however, a real issue that has to be dealt with if these AI models are going to continue to grow.

Sources include: The New York Times

And that’s our show for today. You can find show notes at our news site, technewsday.com or .ca, take your pick.

Hashtag Trending is on summer hours so there’s no morning news edition tomorrow, but our weekend show will be released early on Friday.

Thanks for listening. I’m your host Jim Love, have a Thrilling Thursday.

 
