Congress has reportedly placed strict limits on the use of ChatGPT and similar generative AI tools. Axios says it obtained a memo from Catherine Szpindor, the House of Representatives' Chief Administrative Officer, laying out the conditions under which congressional offices may use ChatGPT and comparable large language AI models. According to Szpindor, staff may only use the paid ChatGPT Plus service, owing to its tighter privacy controls, and even then only for “research and evaluation” purposes. The technology is not to become part of their daily workflow.
Even with ChatGPT Plus, House offices are only authorized to use the chatbot with publicly available data, and they must manually enable the privacy settings that keep their interactions from feeding data back into the AI model. For now, the free tier of ChatGPT and all other large language models are off-limits.
We have reached out to the House for comment and will update this story if we hear back. A usage policy like this isn't surprising, as institutions and companies have repeatedly warned about the accidents and misuse that generative AI can invite. Republicans drew criticism for running an AI-generated attack ad, for instance, and Samsung staff allegedly leaked sensitive information while using ChatGPT for work. Schools have also banned these systems over cheating concerns. The House's limits are meant to head off comparable problems, such as AI-written legislation and speeches.
The House policy is unlikely to face much opposition, as both chambers of Congress are actively pursuing AI regulation and governance. Representative Ritchie Torres has introduced a House bill that would require disclaimers on the use of generative AI, while Representative Yvette Clarke wants similar disclosures for political ads. Senators, meanwhile, have held hearings on AI and proposed a bill that would hold AI developers accountable for harmful content created with their platforms.
Frequently Asked Questions (FAQs) about AI restrictions
What are the limitations imposed by Congress on the use of AI models like ChatGPT?
Congress has placed strict limits on the use of AI models such as ChatGPT. According to a memo obtained by Axios, staff may only use the paid ChatGPT Plus service, and only for “research and evaluation” purposes; using ChatGPT or similar AI tools as part of everyday work is prohibited. The chatbot may also be used only with publicly available data, and privacy features must be manually enabled so interactions are not fed back into the AI model. The free tier of ChatGPT and all other large language models are currently barred under these restrictions.
Why has Congress implemented these restrictions?
The restrictions respond to concerns about accidents and misuse associated with generative AI. Incidents such as an AI-generated attack ad run by Republicans and alleged leaks of sensitive data by Samsung staff using ChatGPT have highlighted the risks. The limits aim to prevent similar problems, including AI-written legislation and speeches.
Are these restrictions expected to face opposition?
The House policy is not expected to face significant opposition, as both chambers of Congress are actively working toward AI regulation and governance. Representative Ritchie Torres has introduced a bill requiring disclaimers on the use of generative AI, while Representative Yvette Clarke seeks similar disclosures for political ads. Senators have held hearings on AI and proposed a bill to hold AI developers accountable for harmful content generated using their platforms.
How do these restrictions impact privacy and data protection?
The limits address privacy concerns by permitting only ChatGPT Plus, which offers tighter privacy controls. Staff must manually enable privacy features so their interactions do not contribute data to the AI model. By restricting use to the paid service, Congress aims to protect sensitive information.
Can other large language models be used under these restrictions?
No. The restrictions apply not only to ChatGPT but to other large language models as well. The free tier of ChatGPT and any comparable AI models are currently prohibited in congressional offices. The goal is a consistent approach across different AI tools that keeps potential risks in check.
More about AI restrictions
- Axios article: Congress Limits Staff Use of AI Models Like ChatGPT
- Representative Ritchie Torres’ bill: Bill Text – 117th Congress (2021-2022) – H.R.1962
- Representative Yvette Clarke’s proposal: Bill Text – 117th Congress (2021-2022) – H.R.2956
- Senate hearings on AI: Senate Committee on Commerce, Science, and Transportation – Hearings
- AI accountability bill proposed by Senators: Bill Text – 117th Congress (2021-2022) – S.2633