
Is ChatGPT losing its edge? - Stanford-Berkeley study investigates

2023-7-20 17:31 | Posted by: admin | Views: 420 | Comments: 0 | Source: Proactive Insights



OpenAI's ChatGPT is reportedly deteriorating in capability, and researchers have yet to determine the cause, according to a recent study conducted by Stanford University and UC Berkeley.

The recent study demonstrated that newer versions of ChatGPT provided significantly less accurate answers to the same set of questions within a span of a few months, with researchers unable to explain this deterioration in performance.

Researchers Lingjiao Chen, Matei Zaharia and James Zou put ChatGPT-3.5 and ChatGPT-4 models through a series of tasks involving solving math problems, answering sensitive questions, writing new lines of code and conducting spatial reasoning from prompts to gauge the reliability of the different versions.

Highlighting the potential for substantial change in LLM behaviour over relatively short periods, the researchers stressed the importance of continuous monitoring of AI model quality.

They recommend that users and companies relying on LLM services in their workflows implement a form of monitoring analysis to ensure consistent performance.
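The monitoring analysis they recommend can be sketched as a small harness that periodically replays a fixed benchmark against an LLM service and flags drift against a recorded baseline. This is an illustrative sketch, not code from the study: `query_llm`, the benchmark items, and the drift threshold are all hypothetical placeholders.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a real call to an LLM API; replace with the
    provider client used in your workflow."""
    raise NotImplementedError

# A fixed, versioned benchmark: (prompt, expected answer) pairs whose
# ground truth is known and stable over time.
BENCHMARK = [
    ("Is 101 a prime number? Answer yes or no.", "yes"),
    ("Is 102 a prime number? Answer yes or no.", "no"),
]

def run_benchmark(query=query_llm) -> float:
    """Replay the benchmark and return the fraction answered correctly."""
    correct = 0
    for prompt, expected in BENCHMARK:
        answer = query(prompt).strip().lower()
        correct += answer.startswith(expected)
    return correct / len(BENCHMARK)

def check_drift(accuracy: float, baseline: float,
                tolerance: float = 0.05) -> bool:
    """Flag a regression when accuracy falls more than `tolerance`
    below the recorded baseline for this model version."""
    return accuracy < baseline - tolerance
```

Run on a schedule, the harness records accuracy per model version; a drop like the one reported in the study (0.976 down to 0.024) would trip `check_drift` at any reasonable tolerance.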



Shift in models

ChatGPT's responses to sensitive queries, particularly those relating to ethnicity and gender, also became more concise and evasive.

The researchers observed a shift in the models' approach to dealing with sensitive questions.

While earlier versions offered extensive reasoning for refusing to answer certain sensitive queries, the June versions simply issued an apology and refused to respond.

In the tests, the March version of ChatGPT-4 could identify prime numbers with an impressive 97.6% accuracy.

However, by June, the same model's accuracy had sharply declined to a mere 2.4%.
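A task like prime identification makes a good drift benchmark precisely because its ground truth is cheap to compute exactly. A minimal trial-division primality check (illustrative; not code from the study):

```python
def is_prime(n: int) -> bool:
    """Trial division: test odd divisors up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2  # 2 is the only even prime
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True
```

Scoring a model's yes/no answers against such a function is what yields exact accuracy figures like the 97.6% and 2.4% reported above.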

This study follows OpenAI's announcement on June 6 about plans to create a team dedicated to managing potential risks associated with superintelligent AI systems, which the organisation anticipates emerging within the decade.

[Source] Proactive Insights. Special statement: the content above (including any images or videos) comes from the internet, with the source noted; this platform only provides information and storage services.


