Adapting to AI: Regulatory Frameworks for Responsible Media

Nota Staff

Rapid advancements in generative AI are likely to change the media landscape faster than regulations can keep up with the technology. Some experts predict that 90% of online content will be AI-generated by 2025. The implications of AI for public trust and our institutions require collective action to regulate AI responsibly.

Rapid Adoption of Gen AI

Regulating AI may prove tricky, as generative AI is one of the fastest adopted technologies in history. According to current data, ChatGPT boasts over 180 million total users and 1.6 billion website visits in December 2023 alone. 35% of companies currently use AI, and another 42% are exploring its use. As the technology continues to improve and more people and companies adopt AI, any attempt to slow its advancement is futile. Because of this, regulation must focus on steering the technology toward the greater good.

Current AI Regulatory Initiatives

The European Union is leading the charge on regulating AI, with the US not far behind. The EU's AI Act calls for a formal review process for all commercial AI products and restricts the use of AI in facial recognition. The Biden Administration announced a "Blueprint for an AI Bill of Rights" that addresses discrimination, data privacy, transparency, and other public concerns that "have the potential to impact the American public's rights, opportunities, or access to critical resources or services."

Importance of AI Regulations

Regulating AI is the only way to develop it for the greater good while minimizing risks. AI comes with a host of potential dangers to users and society as a whole. As the United States approaches the 2024 Presidential Election, deepfakes used to influence voters are a legitimate concern. There are examples of AI chatbots lying, turning racist, and encouraging users to commit suicide. Facial recognition software misidentifying people's race or gender could lead to discrimination. The importance of adopting AI regulations, and adopting them fast, cannot be overstated.

AI’s Threat to Media Ecosystem

Fake news and propaganda amplified by AI could compromise every channel through which we consume media. In this way, generative AI represents an existential threat to the media ecosystem. Assistive AI, on the other hand, has the ability to enhance the way media is published. With assistive AI, humans are the ones who generate the content. Humans need to remain in charge of generating content, and sensible regulations will ensure AI follows this model.

Guiding Principles for AI Regulation

Though regulating AI is difficult given the technology's complexity, here are four principles for government leaders and AI experts to consider:

  1. Informed Input

A strong public-private partnership between regulators and AI experts is crucial. Regulations should be shaped by the people who understand and build AI and implemented by the government. 

  2. Assistive AI

Assistive AI is a tool for everyone that adds value to society. Creating favorable conditions for assistive AI through regulations is one way to steer AI companies in this positive direction. 

  3. Human Accountability

Humans must continue to play an essential role in the functionality of AI. Having a "human-in-the-loop" mitigates the risks of inaccurate or inappropriate AI output.

  4. Incentives for Innovation and Adoption

Regulations that focus on innovation in AI technology can foster a positive working environment with minimal job loss. The widespread integration of AI into society is inevitable, and regulations will ensure a safer and more effective product.

The Future of AI Regulations

The transformative impact of generative AI on the media landscape demands swift and responsible regulation. This depends on government officials and AI experts working together to craft common-sense rules. With the rapid adoption of generative AI in both the private and public sectors, the need for effective regulation becomes ever more pressing.
