
Chatbot for spies: Microsoft has launched an artificial intelligence model without an internet connection


Microsoft has created a generative artificial intelligence model based on GPT-4, developed specifically for US intelligence agencies, that operates without an internet connection.

This is reportedly the first time Microsoft has deployed a large language model in a secure environment, designed to let intelligence agencies analyze top-secret information without connectivity risks while still conversing with a chatbot the way users do with ChatGPT and Microsoft Copilot.

According to Bloomberg, the new artificial intelligence service (which does not yet have a public name) fits the growing interest of intelligence agencies in using generative AI to process classified data while reducing the risk of data leaks or hacking attempts. ChatGPT normally runs on cloud servers, which can expose data to leakage and interception; for that reason, the CIA announced last year that it planned to build its own ChatGPT-like service. Microsoft's solution is a separate project.

William Chappell, Microsoft's Chief Technology Officer for Strategic Missions and Technologies, noted that the development of the new system involved 18 months of work on modifying an AI supercomputer in Iowa. The modified GPT-4 model is designed to read files provided by users but does not have access to the open internet.

"This is the first time we've had an isolated version - where isolated means not connected to the internet - and it is on a special network only accessible to the US government," Chappell said.

The new service was activated on Thursday and is now available to approximately 10,000 people in the intelligence community, ready for further testing by the relevant agencies. According to Chappell, it is currently "answering questions."

One serious drawback of using GPT-4 to analyze important data is that it can confabulate: generate inaccurate summaries, draw incorrect conclusions, or give users false information. Since trained AI neural networks are not databases and operate on statistical probabilities, they make poor factual resources unless supplemented with access to external information from another source, using a technique such as retrieval-augmented generation (RAG). Given that limitation, it is entirely possible that GPT-4 could misinform or mislead US intelligence agencies if it is not used carefully.
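The article does not describe how Microsoft's isolated model is grounded in source material, but the retrieval-augmented generation idea can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about the real system: the sample documents, the toy keyword-overlap retriever, and the ask_model placeholder are all hypothetical stand-ins.

```python
# Minimal RAG sketch, not Microsoft's actual system.
# The retriever here is a toy keyword-overlap scorer standing in for real vector search.

documents = [
    "Report A: satellite imagery analysis from a 2023 field survey.",
    "Report B: summary of regional supply-chain disruptions.",
    "Report C: notes on encrypted communications traffic patterns.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model's answer in retrieved text rather than its weights alone."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the sources below.\nSources:\n{joined}\n\nQuestion: {query}"

def ask_model(prompt: str) -> str:
    """Placeholder for a call to an offline LLM; replace with a real inference API."""
    return f"[model response to: {prompt[:60]}...]"

if __name__ == "__main__":
    question = "What do we know about communications traffic?"
    context = retrieve(question, documents)
    print(ask_model(build_prompt(question, context)))
```

The point of the pattern is that the model answers from retrieved documents it is shown at query time, which reduces (though does not eliminate) the risk of confabulated facts.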

Source: arstechnica
