Legislators are back from their summer break and ready to act. The new season has kicked off with a flurry of activity on AI, and it is shaping up to be one of the most consequential yet for the technology.
Much has changed since I first began covering AI policy four years ago. Back then, I had to convince people the subject was worth their time. Not anymore.
It’s gone from being a super nerdy, niche topic to front-page news. Notably, politicians in countries like the US, which has traditionally been reluctant to regulate tech, have now come out swinging with a wide variety of proposals.
On Wednesday, tech leaders and researchers are meeting at Senate Majority Leader Chuck Schumer’s first AI Insight Forum, which will help Schumer formulate his approach to AI regulation. My colleague Tate Ryan-Mosley breaks down what to expect here.
Senators Richard Blumenthal and Josh Hawley have also said they will introduce a bipartisan bill on artificial intelligence. It would include rules on licensing and auditing AI, liability for privacy and civil-rights harms, and data transparency and safety standards. They also want to create a federal AI office to oversee regulation of the technology.
Meanwhile, the European Union is in the final stages of negotiations over the AI Act, and some of the toughest questions about the bill, such as whether to ban facial recognition, how to regulate generative AI, and how enforcement should work, will be hashed out between now and Christmas.
Even the G7 leaders have decided to join in, agreeing to create a voluntary code of conduct for AI.
Alex Engler, a fellow at the Brookings Institution, says that thanks to the excitement surrounding generative AI, the technology has become a kitchen-table topic, and everyone is now aware that something needs to be done. But the devil will be in the details.
Engler says that to deal with the harms AI is already causing in the US, federal agencies that regulate health, education, and other sectors need the power and funding to investigate and prosecute tech companies.
He has proposed a new regulatory instrument called the Critical Algorithmic Systems Classification (CASC), which would give federal agencies the right to investigate and audit AI companies and enforce existing laws.
This isn’t an entirely new idea: a similar approach was outlined by the White House in its AI Bill of Rights last year.
Say you realize you’ve been discriminated against by an algorithm used in college admissions, hiring, or property valuations.
You could take your case to the relevant federal agency, which would be able to use its investigative powers to demand that the tech company hand over data and code showing how these models work, and to audit them. If the regulator found that the system was causing harm, it could sue.
In all the years I’ve been writing about AI, one important thing hasn’t changed: Big Tech’s efforts to water down the rules that would limit its power.
“There’s a bit of misdirection going on,” Engler says. Many of the issues surrounding artificial intelligence, such as surveillance, privacy, and discriminatory algorithms, are affecting us right now. But companies have caught on, and they are pushing the narrative that the big risks from large AI models lie in the far future.
“In fact, many of these risks are already playing out on online platforms,” says Engler. And those platforms are the ones that benefit from reframing present harms as problems of the future.
Lawmakers on both sides of the Atlantic have a short window to make some very consequential decisions about the technology that will determine how it is regulated for years to come. Hopefully they won’t waste it.
You should talk to your child about AI. Here are 6 things you should be aware of.
Over the past year, kids, teachers, and parents alike have had a crash course in artificial intelligence, thanks to the wildly popular AI chatbot ChatGPT. But it’s not just chatbots that children encounter in schools and in their daily lives.
AI is everywhere: recommending shows on Netflix, helping Alexa answer questions, powering your favorite interactive Snapchat filters, and unlocking your smartphone.
While some students will always be more interested in AI than others, understanding the fundamentals of how these systems work is becoming a basic form of literacy, something everyone who finishes high school should know.
At the start of a new school year, here are six essential tips from MIT Technology Review on how to get started on teaching your child about AI.
Bits and Bytes
Chinese AI Chatbots Want to Be Your Emotional Support
What is Chinese company Baidu’s new Ernie Bot like, and how does it compare to its Western rivals? Our China tech reporter Zeyi Yang experimented with it and found that it did a lot of hand-holding. Read more in the weekly newsletter, China Report.
Inside Meta’s AI drama: internal disputes over computing power.
Meta is losing top talent left, right, and center amid internal squabbling over which AI projects get computing resources. Of the 14 researchers who wrote Meta’s LLaMA research paper, more than half have left the company.
Google will require election ads to disclose AI-generated content.
Google will require advertisers to “prominently disclose” when a campaign ad “inauthentically depicts” people or events. As the US presidential election approaches, one of the most tangible fears surrounding emerging AI is the ease with which the technology can be used to create deepfaked images that mislead people. The changes will come into force in mid-November.
Microsoft says it will pay its clients’ AI copyright legal fees.
Generative AI has been accused of stealing intellectual property from authors and artists. Microsoft, which offers a suite of generative AI tools, has said it will cover the legal costs of any of its clients who are sued for copyright infringement.
A scrappy AI startup used cheap human labor to develop 3D models.
Mechanical Turk, but make it 3D. Kaedim, a startup that says it uses machine learning to turn 2D illustrations into 3D models, actually uses human artists for “quality control,” who sometimes build the models from scratch.