Last Updated on May 26, 2023 by Bitfinsider
This week, Microsoft President Brad Smith joined the growing group of titans of the tech sector who are raising the alarm and urging governments to regulate Artificial Intelligence (AI).
Smith stated, “Government needs to move faster,” at a panel discussion with policymakers on Thursday morning in Washington, D.C., according to The New York Times.
Microsoft’s proposal for regulation comes at a time when regulators are paying more attention to the rapid advancement of artificial intelligence, particularly generative AI technologies.
Generative AI refers to artificial intelligence systems that can produce text, images, or other media in response to user prompts. Notable examples include Google’s Bard, OpenAI’s ChatGPT, and the image-generation platform Midjourney.
Since the public debut of ChatGPT in November, calls for AI regulation have become more urgent. Prominent figures including Warren Buffett, Elon Musk, and even OpenAI CEO Sam Altman have discussed the technology’s potential risks. Fear that AI could be used to replace human authors is a major factor in the ongoing Writers Guild of America (WGA) strike, and video game artists share this concern now that game studios are researching the technology.
Smith advocated for demanding licenses for developers before they can deploy advanced AI projects and argued that “high-risk” AI should only be used in licensed AI data centers.
The Microsoft chief also urged businesses to take charge of controlling the technology that has captured the attention of the globe, implying that it is not just up to governments to manage the possible societal effects of AI.
Smith explained: “That means you notify the government when you start testing. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”
Despite these reservations, Microsoft has made a significant investment in AI, reportedly putting over $13 billion into OpenAI, the company behind ChatGPT, and incorporating the popular chatbot into its Bing search engine.
“We are committed and determined as a company to develop and deploy AI in a safe and responsible way,” Smith stated in a post on AI governance. The guardrails required for AI, however, “require a broadly shared sense of responsibility and should not be left to technology companies alone,” the statement continued.
Microsoft unveiled Security Copilot in March, the first specialized product in its Copilot line, which uses AI to help IT and cybersecurity professionals identify cyber threats by analyzing massive amounts of data.
Smith’s remarks coincide with those made by Sam Altman, CEO of OpenAI, last week during a hearing before the U.S. Senate Committee on the Judiciary. Altman proposed setting up a federal institution to control and establish guidelines for the creation of AI.
Altman said: “I would form a new agency that licenses any effort above a certain scale of capabilities, and that can take that license away and ensure compliance with safety standards.”
Disclaimer: The views and opinions expressed by the author, or any people mentioned in this article, are for informational purposes only, and they do not constitute financial, investment, legal, tax or other advice. Investing in or trading cryptocurrency or stocks comes with a risk of financial loss.