Artificial Intelligence "Kidnapped" My Voice

 

Image source: @Visual China | Lei Technology

"AI" has been the hottest topic since the beginning of 2023. But just as people grow excited about the rise of artificial intelligence, a crisis has followed close behind. In the past few months we have seen "AI Q&A", "AI image generation", and "AI face swapping", and each of these technologies has sparked considerable discussion as it came to light.

Before we could even begin imagining "AI" as our right-hand assistant, these tools had already become the favorite "partners" of criminals. McAfee, one of the world's largest dedicated security technology companies, recently released survey data showing that more than 77% of the phone-fraud victims it surveyed had been deceived by an "AI voice".

These victims found it hard to tell whether the voice on the call really belonged to a family member or friend, so at the request of an unfamiliar caller they handed over payment after payment to the criminals. With "AI", anyone's voice can now be cloned with ease, and beyond fraud it could surface on any occasion and in any place. Netizens generally believe that "AI voice" will sooner or later show up in court as a major source of forged testimony.

Sounds frightening, doesn't it? The voice is fake, but the fraud is real. Anyone who spends time online has probably seen songs created with "AI" on various social media platforms recently, such as "AI Stefanie Sun" covering Jay Chou's "Hair Like Snow" and "AI Taylor Swift" covering Jay Chou.

There is also "AI Jay Chou" covering David Tao. These "AI"-generated songs have become a popular form of fan re-creation among netizens (source: bilibili). In fact, "AI" cover songs and "AI voice" fraud rely on the same technique: creators import voice samples into certain tools and train a model on a high-performance graphics card. It does not take long before they can easily obtain a piece of audio that sounds "as real as the genuine article".
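
For a concrete sense of how little plumbing this workflow involves, here is a minimal sketch using the open-source Coqui TTS library and its XTTS v2 voice-cloning model; the library choice and the file names are assumptions for illustration, not the specific tool used in the cases described above.

```python
# Minimal voice-cloning sketch (assumed tooling: the open-source Coqui TTS
# library and its XTTS v2 model; file names are hypothetical).
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model. Training a model from
# scratch is what needs the "high-performance graphics card" mentioned above;
# inference with a pretrained model is far cheaper.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short, clean reference recording of the target speaker conditions the
# model, and arbitrary text is then synthesized in that speaker's voice.
tts.tts_to_file(
    text="This sentence was never actually spoken by the target speaker.",
    speaker_wav="reference_clip.wav",  # hypothetical sample of the target voice
    language="en",
    file_path="cloned_voice.wav",
)
```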

To create a song with "AI", the output also has to be tuned so that its rhythm and pitch match the original track. Of course, the latest versions of the creation tools can already do this with "one-click processing", and the result is not bad. The real difficulty of "AI voice" lies in handling emotion: beyond adjusting the rhythm of the simulated audio, the content also has to change with the speaker's emotional state.
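
What that tuning step boils down to is pitch-shifting and time-stretching the generated vocal until it sits on the original track; the rough sketch below uses librosa for both operations (an assumed library choice, with hypothetical file names and offsets).

```python
# Rough sketch of the pitch/tempo alignment step (assumed library: librosa;
# file names and the 2-semitone / 5% figures are hypothetical examples).
import librosa
import soundfile as sf

# Load the synthesized vocal track.
vocal, sr = librosa.load("ai_vocal.wav", sr=None)

# Raise the pitch by 2 semitones and speed the clip up by 5% so the cloned
# vocal lines up with the key and rhythm of the original backing track.
vocal = librosa.effects.pitch_shift(vocal, sr=sr, n_steps=2)
vocal = librosa.effects.time_stretch(vocal, rate=1.05)

sf.write("ai_vocal_aligned.wav", vocal, sr)
```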

In one of the real cases cited by McAfee, a mother received a "kidnapping" call from a scam gang. On the other end of the line, a voice very much like her daughter's was screaming for help with intense emotion, and that was a major reason the victim took the bait. In March of this year, an AI tool called "MockingBird" appeared. According to its developer, it can extract human voices from phone calls and videos, match them with AI algorithms, and finally "piece together" whatever speech content you need from the analyzed material.
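
The "extract human voices from phone calls and videos" step is itself quite mundane; the sketch below pulls a mono audio track out of a video clip with ffmpeg (the file names are hypothetical, and ffmpeg is an assumed choice rather than what any particular tool actually uses).

```python
# Sketch: extracting the audio track from a video clip as raw voice material
# (assumed tool: ffmpeg; file names are hypothetical).
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "short_video.mp4",  # hypothetical source clip
        "-vn",                    # drop the video stream
        "-ac", "1",               # downmix to mono
        "-ar", "16000",           # 16 kHz is a common rate for speech models
        "voice_material.wav",
    ],
    check=True,
)
```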

Netizens who tested the tool found that it can indeed produce an "AI voice", but the requirements are not low: it needs a sufficient number of samples, preferably clean recordings of the human voice. That makes it fairly difficult to extract enough material from a single phone call. Then again, "AI voice" fraud may not need a particularly realistic voice at all.
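
One way to see why a single call rarely yields enough material is to measure how much actual speech a recording contains; the sketch below does this with the WebRTC voice-activity detector via the webrtcvad package (an assumed approach, with a hypothetical input file and threshold).

```python
# Sketch: estimate how much usable speech a recording actually contains
# (assumed tool: the webrtcvad package; "phone_call.wav" is hypothetical).
import wave
import webrtcvad

vad = webrtcvad.Vad(2)  # aggressiveness 0 (least) to 3 (most)

with wave.open("phone_call.wav", "rb") as wf:
    # webrtcvad expects 16-bit mono PCM at 8/16/32/48 kHz.
    assert wf.getnchannels() == 1 and wf.getsampwidth() == 2
    sample_rate = wf.getframerate()
    frame_ms = 30
    frame_bytes = int(sample_rate * frame_ms / 1000) * 2
    pcm = wf.readframes(wf.getnframes())

voiced_ms = 0
for i in range(0, len(pcm) - frame_bytes + 1, frame_bytes):
    if vad.is_speech(pcm[i:i + frame_bytes], sample_rate):
        voiced_ms += frame_ms

print(f"Usable speech: {voiced_ms / 1000:.1f} s")
# A short call rarely yields the minutes of clean speech that training a
# convincing clone typically needs.
```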

In other words, "AI" fraud does not succeed on technology and audacity alone. Still, if everyone can easily "clone" other people's voices, wouldn't the world descend into chaos? Has "AI voice" really reached the point where anyone can create with it freely? To understand where "AI voice" actually stands, I spoke with a senior creator in this field, Teacher Meiji, to hear his views on "AI voice".

Xiaolei: Teacher Meiji, you have done quite in-depth work in the "AI" field. How do you view the "AI voice" fraud cases?

Meiji: Right now it is very difficult to extract enough material for voice training from a single phone call or video alone. Judging from the reported cases, most of the deceived users were in a tense state at the time because they could not tell whether the other party was real, so it is also quite possible that their brains automatically matched the voice to the person they imagined.

Xiaolei: So at this stage we cannot rely on "AI" alone to produce speech that truly passes for the real thing?

Meiji: We see a lot of AI cover songs online, and a lot of cloned voice clips of streamers. But have you noticed that these two kinds of creations share one thing in common, namely "ample samples"? A singer like Stefanie Sun keeps being chosen for AI creations precisely because there is plenty of her voice material available.

Even with sufficient voice material, the hardware requirements remain high: even on a top-end consumer graphics card such as the RTX 4090, training a model still takes a great deal of time.

Xiaolei: If we only have a short segment of audio as material, can an "AI voice" made from it sound authentic?

Meiji: As I said, with too few voice samples it is difficult to produce so-called "AI voice" content at all. Even if you force it, the quality of the result will not be very high.

Of course, there are many ways a fake can still "pass for real". Take children's voices: most children sound alike, especially over the phone, where unclear speech and a generic-sounding voice can easily confuse the listener.

Even a brief interview like this gives us some sense of how "AI" content creators think. On the whole, "AI voice" fraud is not only propped up by technology; it also preys on people's fear and panic in the face of the unknown.

The "AI" tool itself only adds a little extra credibility to what is otherwise ordinary phone fraud. When ChatGPT was first released, no one expected such an AI Q&A platform to spawn so many powerful capabilities. Likewise, "AI voice" is not yet a do-everything tool, but even at this stage it is enough to significantly raise the success rate of phone fraud.

It is hard to imagine how this field will be "played" once "AI voice" tools become simpler to use. Either way, "AI" is genuinely starting to threaten our security. "AI" is simply too dangerous: until corresponding regulations are in place, it can hardly be considered a safe and reliable tool in any field. Not long ago, a female internet celebrity with millions of followers posted a long article accusing "AI face swapping" of damaging her reputation and causing her emotional distress.

In the article, she revealed that criminals had used the technology to graft her face into adult videos, "face-swapping" her in as the female lead. Worse still, with "AI" tools users can type in a few keywords at any time and generate whatever content they want, whether "R-18 images", "celebrities", or "things that never happened". The polished results are hard to tell from the real thing and distort viewers' judgment.

In the past, "a picture is proof" was our main standard for judging whether something was true. Later, with moving video and live voices, that evolved into "video can't be Photoshopped". Today, with "AI" flourishing, images can be fabricated, real people can be grafted into footage, and even voices can be cloned by "AI".

Although "AI voice" cannot be definitively identified as the main reason phone fraud succeeds, the help it provides is already regarded as a risk by security agencies. China has introduced the "Provisions on the Administration of Online Audio and Video Information Services", which require that "non-genuine audio and video information" be clearly labeled and prohibit using deep learning, big data, and similar technologies to produce or publish fake news.

And this is only the beginning; there is still a long way to go before "AI" is properly shackled. According to McAfee's report, in 2022 alone the amount involved in "AI voice" fraud reached 2.6 billion US dollars, roughly 18 billion RMB. So how can we guard against "AI voice" fraud?

As mentioned earlier, generating an "AI voice" requires a large number of voice samples from the person being cloned, so posting less video and audio containing your real voice on social platforms of unknown risk is actually the safest precaution. In addition, the 77% success rate of "AI voice" fraud is closely tied to people's fear. Until you have confirmed the other party's real identity, try not to comply with their requests, and above all do not transfer money.

In any case, "AI voice" is not yet as miraculous as online rumors make it out to be, and ordinary netizens do not need to be overly anxious about their voices being cloned. When you run into a situation that might be fraud, stay calm first, settle your emotions, and get your thoughts in order; only then can you respond well.
