DAI#40 – Imitation, OpenAI drama, and AI safety scrambles

Welcome to this week’s roundup of human-generated AI news.

This week AI offended an actress and lost its voice.

Sony doesn’t want AI to listen to its music.

And we look inside the “black box” to decode the AI mind.

Let’s dig in.

Show your work

If an AI system always gave you the right answer but you didn’t understand how it worked, would it matter? Even the engineers building LLMs don’t quite understand how they work.

Sam Jeans explores Anthropic’s attempt to change that as its researchers peer inside the “black box” to decode the AI mind. What did they find?

I’m speechless

Scarlett Johansson said she was shocked to hear that GPT-4o’s sultry “Sky” voice sounded eerily like hers.

Sam Altman says his request to use her voice, Johansson’s refusal, his tweet referencing the movie “Her”, and Sky sounding similar to Johansson are all pure coincidence.

What do you think? This X post did a great job of summing up the debate.

While that debate continues, there’s still the small matter of ensuring AI doesn’t destroy us all.

It’s becoming clearer that Ilya Sutskever and Jan Leike from OpenAI’s “superalignment” team may have left the company over safety concerns. What did they see?

The soap opera at OpenAI drags on, raising more questions about Altman’s leadership.

You can’t touch this

Sony Music Group has warned 700 companies, including Google, Microsoft, and OpenAI, that its music and other content are off-limits for AI training.

Sony: ‘We think you guys used our music. Did you?’
AI company: ‘We’d never do that.’
Sony: ‘Could we take a peek at your training data?’
AI company: ‘Ummm….’

I thought it would be appropriate to let Suno have a shameless go at summing the situation up. It’s terrible. I love it.

The best image, video, or music generators are almost certainly trained on copyrighted data. But does it have to be that way?

Researchers at The University of Texas found a way to train a model to create images without ‘seeing’ copyrighted work.

AI deep fakes rally

Bollywood may be one of India’s biggest industries, but with the country’s elections in full swing, political AI deep fakes as a service is a growing trend.

The line between creative political messaging and dangerous AI-generated misinformation is being blurred with potentially serious consequences.

The creators of leading AI models say they put guardrails in place to prevent misuse of their tools, but those guardrails don’t seem to be working very well.

A UK government study found that all 5 LLMs the researchers tested were “highly vulnerable” to “basic” jailbreaks.

Your job on autopilot

Microsoft unveiled more AI-powered work automation tools at its Build event. With upgrades to Copilot and AI agents now able to handle everyday tasks, your boss may wonder if he still needs you to come in on Monday.

Leading AI companies have agreed to a new set of voluntary safety commitments ahead of the two-day AI summit in Seoul. Maybe they could agree to fund a universal basic income (UBI) from their profits to replace workers’ salaries.

The e/acc supporters will tell you we don’t need to worry about AI safety, but Google is clearly more than a little nervous. The company just published its Frontier Safety Framework to mitigate anticipated “severe” AI risks.

The hypothetical scenarios the document describes are chilling. Google’s admission that there are dangers it can’t anticipate is even more so.

Yann LeCun disagrees.

Talking AI

Sam Jeans had a fascinating discussion with Chris Benjaminsen, co-founder and Director of Channels at FRVR, a platform that uses generative AI to create games from natural language.

Sam tried his hand at making two games and demonstrated how simple the process is.

Want to try making a game of your own? You can access FRVR.ai’s public beta and start creating your very own games for free here.

AI Events

This week, the 14th annual City Week conference in London hosted over 1,000 top-level decision-makers from financial institutions worldwide to discuss how tech like AI is transforming the finance industry.

At the Enterprise Generative AI Summit West Coast in Silicon Valley, California, AI practitioners, data scientists, and business leaders explored how to integrate generative AI capabilities into their organizations.

If you’re considering a trip to the Middle East, here’s a great reason to book your ticket. The COMEX Global Technology Show 2024 takes place next week and offers an exciting glimpse into a future shaped by AI, VR, and blockchain.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

Do you think Sky sounds like Scarlett Johansson? I’m certain Altman wanted it to, but I don’t really hear it. I hope they bring Sky back.

I’d love to know what OpenAI is working on that caused its superalignment team to jump ship. It must be pretty impressive. And scary.

Did you try your hand at making an AI game of your own? We’d love to try it out. Send us a link and let us know if we missed any interesting AI news.
