Last Thursday, Senators Elizabeth Warren and Eric Schmitt introduced a bill aimed at stirring up more competition for Pentagon contracts in AI and cloud computing. Amazon, Microsoft, Google, and Oracle currently dominate those contracts. "The way that the big get bigger in AI is by sucking up everyone else's data and using it to train and develop their own systems," Warren told the Washington Post.
The new bill would "require a competitive award process" for contracts, which would bar the Pentagon from issuing "no-bid" awards to companies for cloud services or AI foundation models. (The lawmakers' move came a day after OpenAI announced that its technology would be deployed on the battlefield for the first time, in a partnership with Anduril, completing a year-long reversal of its policy against working with the military.)
While Big Tech contends with antitrust investigations, including the ongoing lawsuit against Google over its dominance in search and a newly opened investigation into Microsoft, regulators are also accusing AI companies of, well, just straight-up lying.
On Tuesday, the Federal Trade Commission took action against the smart-camera company IntelliVision, saying that the company makes false claims about its facial recognition technology. IntelliVision has promoted its AI models, which are used in both home and commercial security camera systems, as operating without gender or racial bias and as being trained on millions of images, two claims the FTC says are false. (The company could not support the bias claim, and the system was trained on only 100,000 images, the FTC says.)
A week earlier, the FTC made similar claims of deception against the security giant Evolv, which sells AI-powered security scanning products to stadiums, K-12 schools, and hospitals. Evolv markets its systems as offering better security than simple metal detectors, saying they use AI to accurately screen for weapons, knives, and other threats while ignoring harmless items. The FTC alleges that Evolv has inflated its accuracy claims and that its systems have failed in consequential cases, such as a 2022 incident in which they failed to detect a seven-inch knife that was ultimately used to stab a student.
These add to the complaints the FTC made back in September against a number of AI companies, including one that sold a tool for generating fake product reviews and another selling "AI lawyer" services.