Nuclear wastewater discharge into the sea sparks heated debate! The Japanese government is revealed to have been using AI to monitor "false information" across the entire internet in real time

Shin Ji Won reports
Editors: Aeneas, Sleepy
Media reports revealed that as early as last year, the Japanese government began using AI tools to detect online remarks related to the discharge of Fukushima nuclear wastewater, and to respond within hours.
In the past few days, the news that Japan has officially begun discharging nuclear-contaminated water into the Pacific Ocean has attracted widespread attention.
Just before the release, media reported that the Japanese government had begun using AI tools last year to monitor any speech related to the planned discharge of treated water from the Fukushima nuclear power plant.
In June, the AI spotted a South Korean media report claiming that senior officials from Japan's Ministry of Foreign Affairs had made huge political donations to the International Atomic Energy Agency (IAEA).
Within hours, the Japanese government responded, refuting the report as "baseless" in both English and Japanese.
As previously reported by Nikkei Asia, Japan's Ministry of Foreign Affairs launched a new AI system in 2023 to collect and analyze information on platforms such as social media, and to track its influence on public opinion over the medium to long term.
Notably, the system covers not only information aimed at Japanese audiences, but also coverage of Japan in other countries and regions.
Event review
In March 2011, an earthquake and tsunami destroyed the cooling system of the Fukushima Daiichi Nuclear Power Plant, causing nuclear fuel to melt down in three reactors and radioactive material to leak continuously. The ensuing widespread pollution forced tens of thousands of people to evacuate.
To cool the overheated reactor cores after the explosions, more than 1.3 million cubic meters of seawater have since been used.
The contaminated water was also collected and stored in more than 1,000 stainless steel tanks on the site.
Of the 64 radionuclides present in the contaminated water, those posing the greatest threat to human health are carbon-14, iodine-131, cesium-137, strontium-90, cobalt-60, and tritium.
To treat this wastewater, TEPCO uses its Advanced Liquid Processing System (ALPS), which removes radionuclides through five stages of co-precipitation, adsorption, and physical filtration.
However, such large amounts of water also make continuous storage increasingly difficult.
In April 2021, the Japanese government officially approved the discharge of this treated nuclear wastewater into the sea.
Despite concerns from several countries and international organizations, this has not stopped Japan from moving forward with the plan.
At the same time, Japan's Ministry of Foreign Affairs also began using AI to monitor online reports about radioactive substances in the discharged water, and to dilute such information by producing large volumes of its own publicity material.
On July 21, Japan's Ministry of Foreign Affairs tweeted an animated video, in Japanese, English, French, Spanish, Russian, Arabic, Chinese, and Korean, explaining the safety measures being taken.
The video explains how the plant's water is purified to regulatory standards by the Advanced Liquid Processing System (ALPS), and emphasizes that the treated water is diluted more than 100 times with seawater before being released into the open ocean.
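For a sense of scale, the dilution step is simple arithmetic. Below is a minimal sketch in Python; the pre-dilution tritium concentration is an assumed placeholder, while the 1,500 Bq/L operational target (1/40 of Japan's 60,000 Bq/L regulatory limit for tritium) is the publicly stated figure.

```python
# Illustrative dilution arithmetic -- the pre-dilution value is a
# placeholder, NOT an official TEPCO measurement.
pre_dilution_bq_per_l = 140_000  # assumed tritium concentration (Bq/L)

# Publicly stated operational target: below 1,500 Bq/L,
# i.e. 1/40 of Japan's 60,000 Bq/L regulatory limit for tritium.
target_bq_per_l = 1_500

# Minimum seawater dilution factor needed to reach the target.
required_factor = pre_dilution_bq_per_l / target_bq_per_l
print(f"Required dilution factor: {required_factor:.0f}x")   # ~93x

# The 100x dilution described in the video would then give:
print(f"After 100x dilution: {pre_dilution_bq_per_l / 100:.0f} Bq/L")  # 1400 Bq/L
```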
AI monitors speech
In fact, technology for monitoring online public opinion has long been the subject of deep and extensive exploration in the AI field.
One of the hottest areas is the use of a combination of algorithms, machine-learning models, and human reviewers to deal with "fake news" posted on social media.
A 2018 study of Twitter showed that fake news stories were 70% more likely to be retweeted than real news.
Real news also takes about six times longer to reach a group of 1,500 people, and most of the time rarely reaches more than 1,000 people. Popular fake news, in contrast, can reach as many as 100,000 people.
To this end, Meta launched a new AI tool, Sphere, in 2022 to help ensure the accuracy of information.
Sphere is the first AI model capable of scanning hundreds of thousands of references at once to check if they support the corresponding claims.
Sphere's dataset includes 134 million public web pages. It relies on the internet's collective knowledge to quickly scan hundreds of thousands of web citations for factual errors.
Meta says Sphere has scanned every page on Wikipedia to find citations that do not actually support the claims they are attached to.
When Sphere finds suspicious sources, it recommends stronger sources or corrections to help improve the accuracy of entries.
Earlier AI systems could already identify statements that lack citations, but Meta's researchers say that singling out dubious claims and determining whether a cited source really supports them requires "deep understanding and analysis" from an AI system.
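Sphere itself is not available in this form, but the core step of scoring whether a cited passage supports a claim can be sketched with off-the-shelf sentence embeddings. The snippet below is a minimal illustration, not Meta's pipeline; the model name and the 0.5 threshold are assumptions.

```python
# Minimal sketch of claim-vs-evidence support scoring.
# NOT Meta's Sphere pipeline; it only illustrates the idea of comparing
# a claim against cited passages using sentence embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf model

claim = "The plant began operating in 1971."
citation_passages = [
    "The facility was commissioned in 1971 and ran for four decades.",
    "Local fisheries opposed the construction of the plant.",
]

claim_emb = model.encode(claim, convert_to_tensor=True)
passage_embs = model.encode(citation_passages, convert_to_tensor=True)

# Cosine similarity as a crude proxy for "does this source support the claim?"
scores = util.cos_sim(claim_emb, passage_embs)[0]
for passage, score in zip(citation_passages, scores):
    s = score.item()
    verdict = "likely supports" if s > 0.5 else "weak/unrelated"  # assumed threshold
    print(f"{s:.2f}  {verdict}: {passage}")
```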
The development of Sphere marks Meta's efforts to address misinformation on the platform.
For several years, Meta has been heavily criticized by users and regulators over misinformation spread on Facebook, Instagram, and WhatsApp. CEO Mark Zuckerberg was even summoned before Congress to discuss the issue.
Spotting fake news by studying how it spreads on social media
In Europe, there's also the Fandango project, which is building software tools to help journalists and fact-checkers detect fake news.
Whether content has been Photoshopped or deepfaked, Fandango's system can reverse-engineer the changes, using algorithms to help journalists spot tampered content.
In addition, the system finds pages or social media posts with similar wording and claims, based on fake news already flagged by fact-checkers.
Behind this system is a range of AI algorithms, especially natural language processing.
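The "find similar posts" step can be approximated with standard text-similarity tooling. The sketch below uses scikit-learn's TF-IDF vectors to rank candidate posts against an already-flagged story; it illustrates the general technique, not Fandango's actual implementation, and the example texts are invented.

```python
# Minimal sketch of flagged-story similarity search -- not Fandango's code.
# Given a story already flagged as false, rank other posts by textual similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

flagged_story = "Officials secretly donated millions to the agency before the report."
candidate_posts = [
    "Report claims officials made secret million-dollar donations to the agency.",
    "The weather will be sunny this weekend across the region.",
    "Millions in hidden donations allegedly flowed from officials to the agency.",
]

# Fit TF-IDF on the flagged story plus candidates, then compare vectors.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([flagged_story] + candidate_posts)
similarities = cosine_similarity(matrix[0:1], matrix[1:])[0]

# Posts most similar to known fake news are surfaced for fact-checkers.
for post, sim in sorted(zip(candidate_posts, similarities), key=lambda x: -x[1]):
    print(f"{sim:.2f}  {post}")
```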
Michael Bronstein, a professor at the University of Lugano in Switzerland and Imperial College London in the United Kingdom, takes an atypical AI approach to detecting fake news.
The project, called GoodNews, upends traditional AI tools for detecting fake news.
Historically, such tools analyzed semantic features characteristic of fake news, but they often ran into obstacles: platforms like WhatsApp are encrypted and do not allow access to content.
In addition, fake news often takes the form of images, which are difficult to analyze with natural language processing techniques.
So Professor Bronstein's team turned the traditional model upside down and instead studied how fake news spreads.
The results show that on Facebook, fake news tends to get far more shares than likes, while regular posts tend to get more likes than shares. By spotting such patterns, GoodNews attaches a credibility score to each news item.
The team's first model used graph-based machine learning, trained on Twitter data in which some posts had been shown by journalists to be false.
From this data, they trained the AI algorithm to recognize which stories are false and which are not.
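GoodNews's actual model is graph-based, but the share-versus-like signal alone can be illustrated with a toy classifier. Below is a hedged sketch: synthetic propagation features (shares, likes, cascade depth) fed into a logistic regression. The features and numbers are invented for illustration and bear no relation to the project's real training data.

```python
# Toy sketch of propagation-pattern classification -- not the GoodNews model,
# which applies graph-based machine learning to real Twitter cascades.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic features per story: [shares, likes, cascade_depth]
# (invented numbers mimicking the reported pattern: fake news gets
#  proportionally more shares than likes, and spreads deeper).
X = np.array([
    [900, 300, 12],   # fake-like pattern
    [850, 250, 10],   # fake-like pattern
    [200, 900, 3],    # regular-post pattern
    [150, 800, 2],    # regular-post pattern
])
y = np.array([1, 1, 0, 0])  # 1 = flagged false by journalists, 0 = genuine

clf = LogisticRegression().fit(X, y)

# Score a new story's propagation pattern into a crude credibility signal.
new_story = np.array([[700, 280, 9]])
p_fake = clf.predict_proba(new_story)[0, 1]
print(f"Estimated probability of being fake: {p_fake:.2f}")
```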
Multimodal DeepFake detection leaves AIGC forgeries nowhere to hide
In addition to pure text, the rapid development of visual generative models such as Stable Diffusion has also made the problem of DeepFake more serious.
In multimodal media manipulation, the faces of important figures in news reports are swapped (for example, the French president's face), and key phrases or words in the text are tampered with (for example, the positive phrase "is welcome to" altered to the negative "is forced to resign").
To address these new challenges, the researchers proposed a hierarchical multimodal manipulation-reasoning model that detects the cross-modal semantic inconsistencies of tampered samples by fusing and reasoning over semantic features across modalities.
The work has been accepted at CVPR 2023.
Specifically, the authors propose HAMMER: a HierArchical Multi-modal Manipulation rEasoning tRansformer.
The model adopts a two-tower architecture for multimodal semantic fusion and reasoning, and performs fine-grained, hierarchical detection and grounding of multimodal manipulation through shallow and deep manipulation reasoning.
The HAMMER model has the following two characteristics:
1. In shallow manipulation reasoning, the unimodal semantic features extracted by the image encoder and the text encoder are aligned through manipulation-aware contrastive learning. At the same time, the unimodal embeddings interact via a cross-attention mechanism, and a local patch attention aggregation mechanism is designed to locate tampered image regions.
2. In deep manipulation reasoning, a modality-aware cross-attention mechanism in the multimodal aggregator further fuses the multimodal semantic features. On this basis, dedicated multimodal sequence tagging and multimodal multi-label classification are used to locate tampered words in the text and to detect finer-grained manipulation types. (A schematic sketch of the two-tower idea follows below.)
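As promised above, here is a compact PyTorch sketch of the two-tower, shallow-plus-deep reasoning idea. It is a schematic reconstruction, not the authors' released implementation: the encoder stand-ins, dimensions, temperature, and prediction heads are all simplifying assumptions.

```python
# Schematic sketch of HAMMER's two-tower shallow/deep reasoning idea.
# NOT the authors' implementation: encoders, dimensions and loss details
# are simplified assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerManipulationModel(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Stand-ins for the image/text encoders (ViT / BERT-like in practice).
        self.img_proj = nn.Linear(768, dim)
        self.txt_proj = nn.Linear(768, dim)
        # Cross-attention for deep multimodal aggregation.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.binary_head = nn.Linear(dim, 2)   # real vs. manipulated
        self.token_head = nn.Linear(dim, 2)    # per-token: tampered word or not

    def forward(self, img_patches, txt_tokens):
        v = self.img_proj(img_patches)         # (B, P, dim) patch features
        t = self.txt_proj(txt_tokens)          # (B, L, dim) token features

        # Shallow reasoning: contrastively align global image/text embeddings.
        v_glob = F.normalize(v.mean(dim=1), dim=-1)
        t_glob = F.normalize(t.mean(dim=1), dim=-1)
        logits = v_glob @ t_glob.T / 0.07      # InfoNCE-style similarity matrix
        contrastive_loss = F.cross_entropy(logits, torch.arange(v.size(0)))

        # Deep reasoning: text tokens attend over image patches.
        fused, _ = self.cross_attn(query=t, key=v, value=v)
        cls = self.binary_head(fused.mean(dim=1))  # manipulation detection
        tok = self.token_head(fused)               # tampered-word grounding
        return contrastive_loss, cls, tok

# Usage with random stand-in features:
model = TwoTowerManipulationModel()
loss, cls_logits, tok_logits = model(torch.randn(2, 49, 768), torch.randn(2, 16, 768))
print(cls_logits.shape, tok_logits.shape)  # torch.Size([2, 2]) torch.Size([2, 16, 2])
```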
Experiments show that, compared with both multimodal and unimodal detection methods, HAMMER detects and grounds multimodal media manipulation more accurately.
The visualization results for multimodal tamper detection and grounding show that HAMMER can perform both tasks accurately at the same time.
In addition, attention visualizations for tampered words further show that HAMMER performs detection and grounding by focusing on the image regions that are inconsistent with the semantics of the tampered text.
Resources:
https://asia.nikkei.com/Business/Technology/Japan-taps-AI-to-defend-against-fake-news-in-latest-frontier-of-war#
https://asia.nikkei.com/Business/Technology/Japan-deploys-AI-to-detect-false-info-on-Fukushima-water-release
https://www.youtube.com/watch?v=jrM0mw8gp-Y
https://www.mofa.go.jp/press/release/press1e_000443.html
https://tech.facebook.com/artificial-intelligence/2022/07/how-ai-could-help-make-wikipedia-entries-more-accurate/
https://arxiv.org/abs/2304.02556
https://arxiv.org/abs/1902.06673