Google FAKES Gemini AI video to pump up its stock price
By bellecarter // 2023-12-13
A demo of Google's Gemini artificial intelligence (AI) model, with 2.1 million views as of this writing, was posted on the company's official YouTube channel on Dec. 6. The catch, though, is that the Big Tech firm has admitted it "rigged" the video showcasing the new model's ability to process and reason with text, images, audio, video and code.
Google admitted in the description of the video, titled "Hands-on with Gemini: Interacting with Multimodal AI," that "for this demo, latency has been reduced, and Gemini outputs have been shortened for brevity." In other words, the model's actual response time is much longer than the video showed. Subsequently, as first reported by Bloomberg Opinion, Google confirmed to the BBC that it "used still image frames from the footage, and prompting via text." "Our Hands-on with Gemini demo video shows real prompts and outputs from Gemini," said a Google spokesperson. "We made it to showcase the range of Gemini's capabilities and to inspire developers."
In the video, a person asks Google's AI a series of questions while showing it objects on screen. For example, at one point the demonstrator holds up a rubber duck and asks Gemini if it will float. Initially, the AI is unsure what material the duck is made of, but after the person squeezes it and remarks that this causes a squeaking sound, the AI correctly identifies the object. However, what appears to happen in the video at first glance is very different from how the responses were actually generated. The AI was shown a still image of the duck and asked what material it was made of. It was then fed a text prompt explaining that the duck makes a squeaking noise when squeezed, leading to the correct identification.
In another instance, a person performs a cups-and-balls routine and the AI determines where the ball has moved. But again, since the AI was not responding to video, this was achieved by showing it a series of still images. In its blog post, Google admitted that it told the AI where the ball was underneath the three cups and showed it images representing the cups being swapped.
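In both the duck and cups examples, what Google describes boils down to sending one or more still frames together with a typed prompt to the model in a single call. Below is a minimal sketch of that kind of prompting using Google's public google-generativeai Python SDK; the model name, file names, API key placeholder and prompt wording are illustrative assumptions, not the actual prompts Google used.

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumed placeholder key

# Launch-era multimodal model name; an assumption for illustration.
model = genai.GenerativeModel("gemini-pro-vision")

# Duck example: one still frame plus a text hint standing in for the audio cue.
duck = Image.open("duck_frame.jpg")  # hypothetical frame grabbed from the footage
response = model.generate_content(
    [duck, "What material is this duck made of? It squeaks when squeezed."]
)
print(response.text)

# Cups example: a sequence of stills representing the swaps, plus the question.
frames = [Image.open(f"cups_frame_{i}.jpg") for i in range(3)]  # hypothetical stills
response = model.generate_content(
    frames + ["The ball started under the middle cup. Where is it now?"]
)
print(response.text)
```

The point is simply that the model reasons over static frames and typed hints, not over the live video and audio feed the demo appears to show.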
At one point, the user places a world map on the table and asks the AI: "Based on what you see, come up with a game idea... and use emojis." The AI responds by apparently inventing a game called "guess the country," in which it gives clues such as a kangaroo and a koala and recognizes a correct guess when the user points at a country, in this case Australia. However, according to the blog post, the AI did not invent this game at all. Instead, it was given the following instructions: "Let's play a game. Think of a country and give me a clue. The clue must be specific enough that there is only one correct country. I will try pointing at the country on a map," the prompt read. The user then gave the AI examples of correct and incorrect answers. Only after this point was Gemini able to generate clues and identify, from stills of the map, whether the user was pointing to the correct country.
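The workflow Google describes here, an instruction, worked examples of right and wrong answers, then still frames of the map, is essentially few-shot prompting. A minimal sketch of what that might look like with the same public SDK follows; the example wording, file name and closing question are assumptions for illustration, not Google's actual prompts.

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumed placeholder key
model = genai.GenerativeModel("gemini-pro-vision")  # assumed launch-era model name

# The instruction quoted in Google's blog post.
instruction = (
    "Let's play a game. Think of a country and give me a clue. The clue must be "
    "specific enough that there is only one correct country. I will try pointing "
    "at the country on a map."
)

# Hypothetical worked examples standing in for the correct/incorrect answers
# the user supplied before the game began.
examples = (
    "Example: Clue: a kangaroo and a koala. User points at Australia. Verdict: correct.\n"
    "Example: Clue: a kangaroo and a koala. User points at Brazil. Verdict: incorrect."
)

map_frame = Image.open("map_pointing_still.jpg")  # hypothetical still of the user pointing

response = model.generate_content(
    [instruction, examples, map_frame, "Is the user pointing at the correct country?"]
)
print(response.text)
```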
Meanwhile, Google shares rallied upon the release of the Gemini AI video, only to sell off after reports of the rigged demo started hitting tech blogs overnight, with Bloomberg reporting on it Friday morning.
MISLEADING: Rigged AI demo receives backlash
Users of X, formerly Twitter, were not happy with the video, saying they felt misled by the company. Attorney Clint Ehrlich captioned his post: "Google shocked the world with its new AI, 'Gemini.' But it turns out the video was fake: the A.I. cannot do what Google showed. It's my opinion, as a lawyer and computer scientist, that (1) Google lied and (2) it broke the law."
In the same thread, he raised the question of whether Google broke the law with its rigged demo. According to him, under Federal Trade Commission (FTC) standards, if a disclaimer is necessary to prevent an ad from being misleading, it must appear in the ad itself; a separate blog post doesn't cut it. "The disclaimers that Google included in the demo video do not tell consumers the full truth," he explained.
An engineer named Anton Prokhorov also tweeted: "So, do I understand correctly, this Gemini video was not real-time? So fake it till you make it? Again, American prototype-based scheme, just some video editing to drop some fascinating demo for demo purposes… Multimodal AI isn't a big deal anymore, for now, what could be a big deal, is real-time videostream handling." Another user replied to him: "Unfortunately this will persist, however, as long as 'lying' like that is not punished, it's only encouraged by bringing in more attention."
For ZeroHedge's Tyler Durden, the bottom line is to maintain skepticism, especially toward claims made by tech companies showboating their latest and greatest chatbots. "This serves as an important reminder to be cautious about the AI bubble," he concluded. (Related: Google rolls out new generative AI feature that summarizes articles – meaning, you can only see what it allows you to see.)
FutureTech.news has more stories related to Big Tech innovations, be they real or rigged.