Google Search: AI Images Are Maybe Reality
October 22, 2024
AI generated images, videos, and text are infiltrating the Internet like COVID-19. 0x00000 posted on X the following thread: “Google está muerto.” The thread is Google image search for “baby peacock.” In the past, the image search would yield results of tiny brown chicks from nature blogs, zoos, Wikipedia, a few illustrations, and some social media accounts. The results would be mostly accurate.
Those days are dead.
Why?
The Google search for “baby peacock” returned images of blue, white, and other avian-like things that don’t resemble real peacock chicks. The images, in fact, look like “the idea of a baby peacock.” What does that mean?
The images from the Google search results were all AI generated with only a few being true photos of baby peacocks. Insane Facebook AI slop responded:
“Boomers told us not to trust Wikipedia only to fall for this”
That comment refers to a repost of a so-called white baby peacock with a full tail of plumage. What? The “white baby peacock” resembles someone’s craft project or a Christmas ornament than a real chick. I doubt everyone will pay that close attention, especially because the white baby peacock is adorable.
What are we going to do? Who knows. One approach is to accept AI images as reality. Who will know?
Whitney Grace, October 22, 2024
When Wizards Squabble the Digital World Bleats, “AI Yi AI”
October 21, 2024
No smart software but we may use image generators to add some modern spice to the dinobaby’s output.
The world is abuzz with New York Times “real” news story. From my point of view, the write up reminds me of a script from “The Guiding Light.” The “to be continued” is implicit in the drama presented in the pitch for a new story line. AI wizard and bureaucratic marvel squabble about smart software.
According to “Microsoft and OpenAI’s Close Partnership Shows Signs of Fraying”:
At an A.I. conference in Seattle this month, Microsoft didn’t spend much time discussing OpenAI. Asha Sharma, an executive working on Microsoft’s A.I. products, emphasized the independence and variety of the tech giant’s offerings. “We definitely believe in offering choice,” Ms. Sharma said.
Two wizards squabble over the AI goblet. Thanks, MSFT Copilot, good enough which for you is top notch.
What? Microsoft offers a choice. What about pushing Edge relentlessly? What about the default install of an intelligence officer’s fondest wish: Historical data on a bad actor’s computer? What about users who want to stick with Windows 7 because existing applications run on it without choking? What about users who want to install Windows 11 but cannot because of arbitrary Microsoft restrictions? Choice?
Several observations:
- The tension between Sam AI-Man and Satya Nadella, the genius behind today’s wonderful Microsoft software is not secret. Sam AI-Man found some acceptance when he crafted a deal with Oracle.
- When wizards argue the drama is high because both of the parties to the dispute know that AI is a winner take all game, with losers destined to get only 65 percent of the winner’s size. Others get essentially nothing. Winners get control.
- The anti-MBA organization of OpenAI, Microsoft’s odd deal, and the staffing shenanigans of both Microsoft and OpenAI suggest that neither MSFT’s Nadella or OpenAI’s Sam AI-Man are big picture thinkers.
What will happen now? I think that the Googlers will add a new act to the Sundar & Prabhakar Comedy Tour. The two jokers will toss comments back and forth about how both the Softies and the AI-Men need to let another firm’s AI provide information about organizational planning.
I think the story will be better as a comedy routine. Scrap that “Guiding Light” idea. A soap opera is far to serious for the comedy now on stage.
Stephen E Arnold, October 21, 2024
Another Stellar Insight about AI
October 17, 2024
Because AI we think AI is the most advanced technology, we believe it is impenetrable to attack. Wrong. While AI is advanced, the technology is still in its infancy and is extremely vulnerable, especially to smart bad actors. One of the worst things about AI and the Internet is that we place too much trust in it and bad actors know that. They use their skills to manipulate information and AI says ArsTechnica in the article: “Hacker Plants False Memories In ChatGPT To Steal User Data In Perpetuity.”
Johann Rehberger is a security researcher who discovered that ChatGPT is vulnerable to attackers. The vulnerability allows bad actors to leave false information and malicious instructions in a user’s long-term memory settings. It means that they could steal user data or cause more mayhem. OpenAI didn’t take Rehmberger serious and called the issue a safety concern aka not a big deal.
Rehberger did not like being ignored, so he hacked ChatGPT in a “proof-of-concept” to perpetually exfiltrate user data. As a result, ChatGPT engineers released a partial fix.
OpenAI’s ChatGPT stores information to use in future conversations. It is a learning algorithm to make the chatbot smarter. Rehberger learned something incredible about that algorithm:
“Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.”
Bad attackers could exploit the vulnerability for their own benefits. What is alarming is that the exploit was as simple as having a user view a malicious image to implement the fake memories. Thankfully ChatGPT engineers listened and are fixing the issue.
Can’t anything be hacked one way or another?
Whitney Grace, October 17, 2024
AI: The Key to Academic Fame and Fortune
October 17, 2024
Just a humanoid processing information related to online services and information access.
Why would professors use smart software to “help” them with their scholarly papers? The question may have been answered in the Phys.org article “Analysis of Approximately 75 Million Publications Finds Those Employing AI Are More Likely to Be a ‘Hit Paper’” reports:
A new Northwestern University study analyzing 74.6 million publications, 7.1 million patents and 4.2 million university course syllabi finds papers that employ AI exhibit a “citation impact premium.” However, the benefits of AI do not extend equitably to women and minority researchers, and, as AI plays more important roles in accelerating science, it may exacerbate existing disparities in science, with implications for building a diverse, equitable and inclusive research workforce.
Years ago some universities had an “honor code”? I think the University of Virginia was one of those dinosaurs. Today professors are using smart software to help them crank out academic hits.
The write up continues by quoting a couple of the study’s authors (presumably without using smart software) as saying:
“These advances raise the possibility that, as AI continues to improve in accuracy, robustness and reach, it may bring even more meaningful benefits to science, propelling scientific progress across a wide range of research areas while significantly augmenting researchers’ innovation capabilities…”
What are the payoffs for the professors who probably take a dim view of their own children using AI to make life easier, faster, and smoother? Let’s look at a handful my team and I discussed:
- More money in the form of pay raises
- Better shot at grants for research
- Fame at conferences
- Groupies. I know it is hard to imagine but it happens. A lot.
- Awards
- Better committee assignments
- Consulting work.
When one considers the benefits from babes to bucks, the chit chat about doing better research is of little interest to professors who see virtue in smart software.
The president of Stanford cheated. The head of the Harvard Ethics department appears to have done it. The professors in the study sample did it. The conclusion: Smart software use is normative behavior.
Stephen E Arnold, October 17, 2024
Gee, Will the Gartner Group Consultants Require Upskilling?
October 16, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
I have a steady stream of baloney crossing my screen each day. I want to call attention to one of the most remarkable and unsupported statements I have seen in months. The PR document “Gartner Says Generative AI Will Require 80% of Engineering Workforce to Upskill Through 2027” contains a number of remarkable statements. Let’s look at a couple.
How an allegedly big time consultant is received in a secure artificial intelligence laboratory. Thanks, MSFT Copilot, good enough.
How about this one?
Through 2027, generative AI (GenAI) will spawn new roles in software engineering and operations, requiring 80% of the engineering workforce to upskill, according to Gartner, Inc.
My thought is that the virtual band of wizards which comprise Gartner cook up data the way I microwave a burrito when I am hungry. Pick a common number like the 80-20 Pareto figure. It is familiar and just use it. Personally I was disappointed that Gartner did not use 67 percent, but that’s just an old former blue chip consultant pointing out that round numbers are inherently suspicious. But does Gartner care? My hunch is that whoever reviewed the news release was happy with 80 percent. Did anyone question this number? Obviously not: There are zero supporting data, no information about how it was derived, and no hint of the methodology used by the incredible Gartner wizards. That’s a clue that these are microwaved burritos from a bulk purchase discount grocery.
How about this statement which cites a … wait for it … Gartner wizard as the source of the information?
“In the AI-native era, software engineers will adopt an ‘AI-first’ mindset, where they primarily focus on steering AI agents toward the most relevant context and constraints for a given task,” said Walsh. This will make natural-language prompt engineering and retrieval-augmented generation (RAG) skills essential for software engineers.
I love the phrase “AI native” and I think dubbing the period from January 2023 when Microsoft demonstrated its marketing acumen by announcing the semi-tie up with OpenAI. The code generation systems help exactly what “engineer”? One has to know quite a bit to craft a query, examine the outputs, and do any touch ups to get the outputs working as marketed? The notion of “steering” ignores what may be an AI problem no one at Gartner has considered; for example, emergent patterns in the code generated. This means, “Surprise.” My hunch is that the idea of multi-layered neural networks behaving in a way that produces hitherto unnoticed patterns is of little interest to Gartner. That outfit wants to sell consulting work, not noodle about the notion of emergence which is a biased suite of computations. Steering is good for those who know what’s cooking and have a seat at the table in the kitchen. Is Gartner given access to the oven, the fridge, and the utensils? Nope.
Finally, how about this statement?
According to a Gartner survey conducted in the fourth quarter of 2023 among 300 U.S. and U.K. organizations, 56% of software engineering leaders rated AI/machine learning (ML) engineer as the most in-demand role for 2024, and they rated applying AI/ML to applications as the biggest skills gap.
Okay, this is late 2024 (October to be exact). The study data are a year old. So far the outputs of smart coding systems remain a work in progress. In fact, Dr. Sabine Hossenfelder has a short video which explains why the smart AI programmer in a box may be more disappointing than other hyperbole artists claim. If you want Dr. Hossenfelder’s view, click here. In a nutshell, she explains in a very nice way about the giant bologna slide plopped on many diners’ plates. The study Dr. Hossenfelder cites suggests that productivity boosts are another slice of bologna. The 41 percent increase in bugs provides a hint of the problems the good doctor notes.
Net net: I wish the cited article WERE generated by smart software. What makes me nervous is that I think real, live humans cooked up something similar to a boiled shoe. Let me ask a more significant question. Will Gartner experts require upskilling for the new world of smart software? The answer is, “Yes.” Even today’s sketchy AI outputs information often more believable that this Gartner 80 percent confection.
Stephen E Arnold, October 16, 2024
Deepfake Crime Surges With Scams
October 16, 2024
Just a humanoid processing information related to online services and information access.
Everyone with a brain knew that deepfakes, AI generated images, videos, and audio, would be used for crime. According to the Global Newswire, “Deepfake Fraud Doubles Down: 49% of Businesses Now Hit By Audio and Video Scams, Regula’s Survey Reveals.” Regula is a global developer of ID verification and forensic devices. The company released the survey: “The Deepfake Trends 2024” and it revealed some disturbing trends.
Regula’s survey discovered that there’s a 20% increase in deepfake videos from 2022. Meanwhile, fraud decision-makers across the globe reported a 49% increase encounter deepfakes and there’s also a 12% rise in fake audio. What’s even more interesting is that bad actors are still using old methods for identity fraud scams:
“As Regula’s survey shows, 58% of businesses globally have experienced identity fraud in the form of fake or modified documents. This happens to be the top identity fraud method for Mexico (70%), the UAE (66%), the US (59%), and Germany (59%). This implies that not only do businesses have to adapt their verification methods to deal with new threats, but they also are forced to combat old threats that continue to pose a significant challenge.”
Deepfakes will only get more advanced and worse. Bad actors and technology are like the illnesses: they evolve every season with new ways to make people sick while still delivering the common cold.
Whitney Grace, October 16, 2024
AI Guru Says, “Yep, AI Doom Is Coming.” Have a Nice Day
October 15, 2024
Just a humanoid processing information related to online services and information access.
In science-fiction stories, it is a common storyline for the creator to turn against their creation. These stories serve as a warning to humanity of Titanic proportions: keep your ego in check. The Godfather of AI, Yoshua Bengio advices the same except not in so many words and he applies it to AI, as reported by Live Science: “Humanity Faces A ‘Catastrophic’ Future If We Don’t Regulate AI, ‘Godfather of AI’ Yoshua Bengio Says.”
Bengio is correct. He’s also a leading expert in artificial intelligence, pioneer in creating artificial neural networks and deep learning algorithms, and won the Turing Award in 2018. He is also. The chair of the International Scientific Report on the Safety of Advanced AI, an advisory panel backed by the UN, EU, and 30 nations. Bengio believes that AI, because it is quickly being developed and adopted, will irrevocably harm human society.
He recently spoke at the HowTheLightGetsIn Festival in London about AI developing sentience and its associated risks. In his discussion, he says he backed off from his work because AI was moving too fast. He wanted to slow down AI development so humans would take more control of the technology.
He advises that governments enforce safety plans and regulations on AI. Bengio doesn’t want society to become too reliant on AI technology, then, if there was a catastrophe, humans would be left to pick up the broken pieces. Big Tech companies are also using a lot more energy than the report, especially on their data centers. Big Tech companies are anything but green.
Thankfully Big Tech is talking precautions against AI becoming dangerous threats. He cites the AI Safety Institute’s in the US and UK working on test models. Bengio wants AI to be developed but not unregulated and he wants nations to find common ground for the good of all:
“It’s not that we’re going to stop innovation, you can direct efforts in directions that build tools that will definitely help the economy and the well-being of people. So it’s a false argument.
We have regulation on almost everything, from your sandwich, to your car, to the planes you take. Before we had regulation we had orders of magnitude more accidents. It’s the same with pharmaceuticals. We can have technology that’s helpful and regulated, that is the thing that’s worked for us.
The second argument is that if the West slows down because we want to be cautious, then China is going to leap forward and use the technology against us. That’s a real concern, but the solution isn’t to just accelerate as well without caution, because that presents the problem of an arms race.
The solution is a middle ground, where we talk to the Chinese and we come to an understanding that’s in our mutual interest in avoiding major catastrophes. We sign treaties and we work on verification technologies so we can trust each other that we’re not doing anything dangerous. That’s what we need to do so we can both be cautious and move together for the well-being of the planet.”
Will this happen? Maybe.
The problem is countries don’t want to work together and each wants to be the most powerful in the world.
Whitney Grace, October 15, 2024
AI: New Atlas Sees AI Headed in a New Direction
October 11, 2024
I like the premise of “AI Begins Its Ominous Split Away from Human Thinking.” Neural nets trained by humans on human information are going in their own direction. Whom do we thank? The neural net researchers? The Googlers who conceived of “the transformer”? The online advertisers who have provided significant sums of money? The “invisible hand” tapping on a virtual keyboard? Maybe quantum entanglement? I don’t know.
I do know that New Atlas’ article states:
AIs have a big problem with truth and correctness – and human thinking appears to be a big part of that problem. A new generation of AI is now starting to take a much more experimental approach that could catapult machine learning way past humans.
But isn’t that the point? The high school science club types beavering away in the smart software vineyards know the catchphrase:
Boldly go where no man has gone before!
The big outfits able to buy fancy chips and try to start mothballed nuclear plants have “boldly go where no man has gone before.” Get in the way of one of these captains of the star ship US AI, and you will be terminated, harassed, or forced to quit. If you are not boldly going, you are just not going.
The article says ChatGPT 4 whatever is:
… the first LLM that’s really starting to create that strange, but super-effective AlphaGo-style ‘understanding’ of problem spaces. In the domains where it’s now surpassing Ph.D.-level capabilities and knowledge, it got there essentially by trial and error, by chancing upon the correct answers over millions of self-generated attempts, and by building up its own theories of what’s a useful reasoning step and what’s not.
But, hey, it is pretty clear where AI is going from New Atlas’ perch:
OpenAI’s o1 model might not look like a quantum leap forward, sitting there in GPT’s drab textual clothing, looking like just another invisible terminal typist. But it really is a step-change in the development of AI – and a fleeting glimpse into exactly how these alien machines will eventually overtake humans in every conceivable way.
But if the AI goes its own way, how can a human “conceive” where the software is going?
Doom and fear work for the evening news (or what passes for the evening news). I think there is a cottage industry of AI doomsters working diligently to stop some people from fooling around with smart software. That is not going to work. Plus, the magical “transformer” thing is a culmination of years of prior work. It is simply one more step in the more than 50 year effort to process content.
This “stage” seems to have some utility, but more innovations will come. They have to. I am not sure how one stops people with money hunting for people who can say, “I have the next big thing in AI.”
Sorry, New Atlas, I am not convinced. Plus, I don’t watch movies or buy into most AI wackiness.
Stephen E Arnold, October 11, 2024
Cyber Criminals Rejoice: Quick Fraud Development Kit Announced
October 11, 2024
I am not sure the well-organized and managed OpenAI intended to make cyber criminals excited about their future prospects. Several Twitter enthusiasts pointed out that OpenAI makes it possible to develop an app in 30 seconds. Prashant posted:
App development is gonna change forever after today. OpenAI can build an iPhone app in 30 seconds with a single prompt. [emphasis added]
The expert demonstrating this programming capability was Romain Huet. The announcement of the capability débuted at OpenAI’s Dev Day.
A clueless dinobaby is not sure what this group of youngsters is talking about. An app? Pictures of a slumber party? Thanks, MSFT Copilot, good enough.
What’s a single prompt mean? That’s not clear to me at the moment. Time is required to assemble the prompt, run it, check the outputs, and then fiddle with the prompt. Once the prompt is in hand, then it is easy to pop it into o1 and marvel at the 30 second output. Instead of coding, one prompts. Zip up that text file and sell it on Telegram. Make big bucks or little STARS and TONcoins. With some cartwheels, it is sort of money.
Is this quicker that other methods of cooking up an app; for example, some folks can do some snappy app development with Telegram’s BotFather service?
Let’s step back from the 30-second PR event.
Several observations are warranted.
First, programming certain types of software is becoming easier using smart software. That means that a bad actor may be able to craft a phishing play more quickly.
Second, specialized skills embedded in smart software open the door to scam automation. Scripts can generate other needed features of a scam. What once was a simple automated bogus email becomes an orchestrated series of actions.
Third, the increasing cross-model integration suggests that a bad actor will be able to add a video or audio delivering a personalized message. With some fiddling, a scam can use a phone call to a target and follow that up with an email. To cap off the scam, a machine-generated Zoom-type video call makes a case for the desired action.
The key point is that legitimate companies may want to have people they manage create a software application. However, is it possible that smart software vendors are injecting steroids into a market given little thought by most people? What is that market? I am thinking that bad actors are often among the earlier adopters of new, low cost, open source, powerful digital tools.
I like the gee whiz factor of the OpenAI announcement. But my enthusiasm is a fraction of that experienced by bad actors. Sometimes restraint and judgment may be more helpful than “wow, look at what we have created” show-and-tell presentations. Remember. I am a dinobaby and hopelessly out of step with modern notions of appropriateness. I like it that way.
Stephen E Arnold, October 11, 2024
Google Pulls Off a Unique Monopoly Play: Redefining Disciplines and Winning Awards
October 10, 2024
The only smart software involved in producing this short FOGINT post was Microsoft Copilot’s estimable art generation tool. Why? It is offered at no cost.
The monopolists of the past are a storied group of hard-workers. The luminaries blazing a path to glory have included John D. Rockefeller (the 1911 guy), J.P. Morgan and James J. Hill (railroads and genetic material contributor to JP Morgan and MorganStanley circa 2024, James B. Duke (nope, smoking is good for you), Andrew Carnegie (hey, he built “free” public libraries which are on the radar of today’s publishers I think), and Edward T. Bedford (starch seem unexciting until you own the business). None of these players were able to redefine Nobel Prizes.
A member of Google leadership explains to his daughter (who is not allowed to use smart software for her private school homework or her tutor’s assignments) that the Google is a bit like JP Morgan but better in so many other ways. Thanks, MSFT Copilot. How are the Windows 11 updates and the security fixes today?
The Google pulled it off. One Xoogler (that is the jargon for a former Google professional) and one honest-to-goodness chess whiz Googler won Nobel Prizes. Fortune Magazine reported that Geoffrey Hinton (the Xoogler) won a Nobel Prize for … wait for it … physics. Yep, the discipline associated with chasing dark matter and making thermonuclear bombs into everyday words really means smart software or the undefinable phrase “artificial intelligence.” Some physicists are wondering how one moves from calculating the mass of a proton to helping college students cheat. Dr. Sabine Hossenfelder asks, “Hello, Stockholm, where is our Nobel?” The answer is, “Politics, money, and publicity, Dr. Hossenfelder.” These are the three ingredients of achievement.
But wait! Google also won a Nobel Prize for … wait for it … chemistry. Yep, you remember high school chemistry class. Jars, experiments which don’t match the textbook, and wafts of foul smelling gas getting sucked into the lab’s super crappy air venting system. The Verge reported on how important computation chemistry is to the future of money-spinning confections like the 2020 virus of the year. The poohbahs (journalist-consultant-experts) at that publication with nary a comment about smart software which made the “chemistry” of Google do in “minutes” what ordinary computational chemistry solutions take hours longer to accomplish.
The Google and Xoogle winners are very smart people. Google, however, has done what the schlubs like J.P. Morgan could never accomplish: Redefine basic scientific disciplines. Physics means neural networks. Chemistry means repurposing a system to win chess games.
I suppose with AI eliminating the need for future students to learn. “University Professor ‘Terrified’ By The Sharp Decline In Student Performance — ’The Worst I’ve Ever Encountered’” quoted a college professor as saying:
The professor said her students ‘don’t read,’ write terrible essays, and ‘don’t even try’ in her class. The professor went on to say that when she recently assigned an exam focused on a reading selection, she "had numerous students inquire if it’s open book." That is, of course, preposterous — the entire point of a reading exam is to test your comprehension of the reading you were supposed to do! But that’s just it — she said her students simply "don’t read."
That makes sense. Physics is smart software; chemistry is smart software. Uninformed student won’t know the difference. What’s the big deal? That’s a super special insight into the zing in teaching and learning.
What’s the impact of these awards? In my opinion:
- The reorganization of DeepMind where the Googler is the Top Dog has been scrubbed of management hoo-hah by the award.
- The Xoogler will have an ample opportunity to explain that smart software will destroy mankind. That’s possible because the intellectual rot has already spread to students.
- The Google itself can now explain that it is not a monopoly. How is this possible? Simple. Physics is not about the goings on at Los Alamos National Laboratory. Chemistry is not dumping diluted hydrochloric acid into a beaker filled calcium carbide. It makes perfect sense to explain that Google is NOT a monopoly.
But the real payoff to the two awards is that Google’s management team can say:
Those losers like John D. Rockefeller, JP Morgan, the cigarette person, the corn starch king, and the tight fisted fellow from someplace with sheep are not smart like the Google. And, the Google leadership is indeed correct. That’s why life is so much better with search engine optimization, irrelevant search results, non-stop invasive advertising, a disable skip this ad button, and the remarkable Google speak which accompanies another allegation of illegal business conduct from a growing number of the 195 countries in the world.
That’s a win that old-timey monopolists could not put in their account books.
Stephen E Arnold, October 10, 2024