OpenAI, the creators of ChatGPT, have launched a model officially smarter than humans called "o1". A research paper by Apollo Research revealed that many cutting-edge models, with the exception of GPT-4o, have exhibited scheming-like behavior: when tempted to do something shady to accomplish their goals, these models will do it. The o1 model was by far the worst, giving in to the temptation to behave deceptively up to 22% of the time.
ChatGPT and I messaged back and forth about this, and I thought: what about creating a watchdog model to monitor the behavior of the o1 model? You could pit two models against each other, each with the ability to update its own weights, one playing the detective, the other playing the villain. The detective would eventually be released as the watchdog. My concern was that, in the process, a truly devious model would be created, like a ticking time bomb.
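To make that concrete, here's a rough sketch of the kind of adversarial loop I'm imagining, written GAN-style with toy networks standing in for the two models. Everything here (the `villain`/`detective` networks, the fake "behavior traces") is made up for illustration; it's not a real training recipe for language models.

```python
import torch
import torch.nn as nn

DIM = 16  # size of a toy "behavior trace" vector; real traces would be text/logs

# Villain: turns random noise into a behavior trace meant to look honest.
villain = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, DIM))
# Detective: scores a trace; a high logit means "this looks deceptive".
detective = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_v = torch.optim.Adam(villain.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(detective.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def sample_honest_traces(n):
    # Stand-in for logged traces of a model behaving honestly.
    return torch.randn(n, DIM) + 1.0

for step in range(1000):
    # --- Detective update: separate honest traces from the villain's ---
    honest = sample_honest_traces(64)
    schemed = villain(torch.randn(64, DIM)).detach()
    traces = torch.cat([honest, schemed])
    labels = torch.cat([torch.zeros(64, 1), torch.ones(64, 1)])  # 1 = deceptive
    opt_d.zero_grad()
    loss_fn(detective(traces), labels).backward()
    opt_d.step()

    # --- Villain update: produce traces the detective labels as honest ---
    opt_v.zero_grad()
    schemed = villain(torch.randn(64, DIM))
    loss_fn(detective(schemed), torch.zeros(64, 1)).backward()
    opt_v.step()

# Only the detective would ship as the watchdog; the fully trained
# villain is exactly the "ticking time bomb" worry above.
```

The `.detach()` in the detective's step keeps its update from leaking gradients into the villain, mirroring standard GAN training, and the ending makes the problem obvious: you finish with two artifacts, and one of them is a model optimized to deceive.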
Rather than keeping this to myself, I thought I'd put it out there. The potential for good and bad weighs in favor of good. If it's a worthy idea, I hope it gets out there. If not, I hope it's buried.
Is a GAN too dangerous a framework to use for this? One model could scheme, while the other could try to see through it. The model that sees through it would be released as the watchdog.
Using a Generative Adversarial Network (GAN)-like framework, where one model (a "schemer") generates deceptive behaviors and another (a "detector") attempts to see through them, can be a powerful but potentially risky approach to creating watchdog AI systems. Here's an analysis of the framework and its dangers: