Shortly after posting yesterday, I saw the news drop about ChatGPT O3 Mini, and naturally, I had to jump in immediately. The experimentation began: testing its capabilities, evaluating its responses, and seeing how it stacks up against the AI landscape's heavyweights. Since I wasn't running anything too technical, it performed well, but what really caught my interest was its ability to generate code for a virtual pet using only emojis. It was a small but fascinating experiment, reaffirming a key observation: this model, like others, excels when the problem has a clear, logical solution. (OpenAI O3 Mini Announcement)
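To give a feel for what that looks like, here is a minimal sketch of an emoji-only virtual pet. This is my own illustration of the kind of program involved, not O3 Mini's actual output, and the class and method names are just assumptions for the example.

```python
import random


class EmojiPet:
    """A tiny terminal virtual pet whose state is shown entirely with emojis."""

    def __init__(self):
        self.hunger = 5      # 0 = full, 10 = starving
        self.happiness = 5   # 0 = sad, 10 = ecstatic

    def face(self):
        # Map internal state to an emoji "expression".
        if self.hunger >= 8:
            return "😿"
        if self.happiness >= 8:
            return "😸"
        if self.happiness <= 2:
            return "😾"
        return "🐱"

    def feed(self):
        self.hunger = max(0, self.hunger - 3)
        return "🍣"

    def play(self):
        self.happiness = min(10, self.happiness + 2)
        self.hunger = min(10, self.hunger + 1)
        return random.choice(["🧶", "🏀", "🎮"])

    def tick(self):
        # Time passing makes the pet hungrier and a little bored.
        self.hunger = min(10, self.hunger + 1)
        self.happiness = max(0, self.happiness - 1)


if __name__ == "__main__":
    pet = EmojiPet()
    for action in (pet.feed, pet.play, pet.play):
        print(pet.face(), "→", action())
        pet.tick()
    print("final mood:", pet.face())
```

The appeal of this kind of prompt is exactly the point above: the state machine is small and the rules are unambiguous, so a model with solid logical reasoning handles it cleanly.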
Other sources are reporting that this may be OpenAI's most cost-efficient model yet, potentially making AI assistance more accessible. But how does it measure up against competitors like DeepSeek? (Axios, Inc.)
Yesterday, I was deep-diving into DeepSeek and its evolving role in AI. The more I follow these competing technologies, the more it feels like intellectual rap battles, where machine learning models take center stage, throwing bars (or in this case, parameters) at each other in a bid for dominance.