Ignoring the Big Thing: Google and Its PR Hunger

December 18, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “FunSearch: Making New Discoveries in Mathematical Sciences Using Large Language Models.” The main idea is that Google’s smart software is — once again — going where no mortal man has gone before. The write up states:

Today, in a paper published in Nature, we introduce FunSearch, a method to search for new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM, whose goal is to provide creative solutions in the form of computer code, with an automated “evaluator”, which guards against hallucinations and incorrect ideas. By iterating back-and-forth between these two components, initial solutions “evolve” into new knowledge. The system searches for “functions” written in computer code; hence the name FunSearch.

I like the idea of getting the write up in Nature, a respected journal. I like even better the idea of Google-splaining how a large language model can do mathy things. I absolutely love the idea of “new.”


“What’s with the pointed stick? I needed a wheel,” says the disappointed user of an advanced technology in days of yore. Thanks, MSFT Copilot. Good enough, which is a standard of excellence in smart software in my opinion.

Here’s a wonderful observation summing up Google’s latest development in smart software:

FunSearch is like one of those rocket cars that people make once in a while to break land speed records. Extremely expensive, extremely impractical and terminally over-specialized to do one thing, and do that thing only. And, ultimately, a bit of a show. YeGoblynQueenne via YCombinator.

My question is, “Is Google dusting a brute-force code search method with marketing sprinkles?” I assume that the approach can be enhanced with more tuning of the evaluator. I am not silly enough to ask if Google will explain the settings, threshold knobs, and probability levers operating behind the scenes.
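For readers who want the mechanics rather than the marketing, here is a minimal sketch of the propose-and-evaluate loop the Google write up describes: an LLM proposes code, an evaluator scores it, and only scoring survivors stay in the pool. The function names, the selection scheme, and the generic `llm_propose` callable are my assumptions, not Google's published implementation.

```python
import random

def evaluate(program_src: str, score_fn) -> float:
    """Run a candidate program and score it. Anything that crashes or returns
    nonsense gets -inf; this is the "evaluator" that guards against hallucinated code."""
    namespace = {}
    try:
        exec(program_src, namespace)                    # a real system would sandbox this step
        return float(score_fn(namespace["heuristic"]))  # assumed entry-point name
    except Exception:
        return float("-inf")

def funsearch_like_loop(llm_propose, seed_src, score_fn, iterations=100):
    """Iterate: the LLM rewrites promising programs, the evaluator keeps the good ones."""
    pool = [(evaluate(seed_src, score_fn), seed_src)]
    for _ in range(iterations):
        _, parent = max(random.sample(pool, k=min(3, len(pool))))  # pick a strong parent
        child_src = llm_propose(parent)                 # LLM proposes a variant of the parent
        score = evaluate(child_src, score_fn)
        if score > float("-inf"):
            pool.append((score, child_src))
    return max(pool)                                    # (best_score, best_program_source)
```

Whether Google's evaluator is much more than a scoring harness like this is exactly what the marketing does not explain.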

Google’s prose makes the achievement clear:

This work represents the first time a new discovery has been made for challenging open problems in science or mathematics using LLMs. FunSearch discovered new solutions for the cap set problem, a longstanding open problem in mathematics. In addition, to demonstrate the practical usefulness of FunSearch, we used it to discover more effective algorithms for the “bin-packing” problem, which has ubiquitous applications such as making data centers more efficient.

The search for more effective algorithms is a never-ending quest. Who bothers to learn how to get a printer to spit out “Hello, World”? Today I am pleased if my printer outputs a Gmail message. And bin-packing is now solved. Good.
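For anyone who has not met bin packing since a long-ago algorithms class, here is the textbook first-fit heuristic, the sort of baseline the Nature paper says FunSearch improved upon. This is a classroom sketch, not the algorithm FunSearch discovered.

```python
def first_fit(items, bin_capacity):
    """Classic first-fit bin packing: put each item in the first bin with room,
    open a new bin otherwise. Fast and simple, but not optimal, which is why
    better heuristics are worth real money to data center operators."""
    remaining = []    # remaining capacity of each open bin
    assignment = []   # which bin each item landed in
    for item in items:
        for i, space in enumerate(remaining):
            if item <= space:
                remaining[i] -= item
                assignment.append(i)
                break
        else:
            remaining.append(bin_capacity - item)
            assignment.append(len(remaining) - 1)
    return len(remaining), assignment

# Example: first_fit([5, 7, 5, 2, 4, 2, 5, 1], 10) uses 4 bins.
```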

As I read the blog post, I found the focus on large language models interesting. But that evaluator strikes me as something of considerable interest. When smart software discovers something new, who or what allows the evaluator to “know” that something “new” is emerging? That evaluator must be quite something to prevent hallucination (a fancy term for making stuff up) without blocking the innovation process. I won’t raise any Philosophy 101 questions, but I will say, “Google has the keys to the universe” with sprinkles too.

There’s a picture too. But where’s the evaluator? Simplification is one thing, but skipping over the system and method that prevents smart software hallucinations (falsehoods, mistakes, and craziness) is quite another.

Google is not a company to shy away from innovation by its human wizards. But if one thinks about the thrust of the blog post, will those Googlers still be needed? Google’s innovativeness has drifted toward me-too behavior and being clever with advertising.

The blog post concludes:

FunSearch demonstrates that if we safeguard against LLMs’ hallucinations, the power of these models can be harnessed not only to produce new mathematical discoveries, but also to reveal potentially impactful solutions to important real-world problems.

I agree, but the “how” hangs above the marketing. When a company has quantum supremacy claims, the grimness of a recent court loss, and assorted legal hassles to juggle, what is this magical evaluator?

I find Google’s deal to use facial recognition to assist the UK in enforcing what appears to be “stop porn” regulations more in line with what Google’s smart software can do. The “new” math? Eh, maybe. But analyzing every person trying to access a porn site and having the technical infrastructure to perform cross-correlation? Now that’s something that will be of interest to governments and commercial customers.

The bin thing and a shortcut for a Python script? Interesting, but those lack the practical “big bucks now” potential of the facial recognition play. That, as far as I know, was not written up and ponied around to prestigious journals. To me, that was the news, not the FUN as a cute reminder of a “function” search.

Stephen E Arnold, December 18, 2023

Google and Its Age Verification System: Will There Be a FAES Off?

December 18, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Just in time for the holidays! Google’s user age verification system is ready for 2024. “Google Develops Selfie Scanning Software Ahead of Porn Crackdown” reports:

Google has developed face-scanning technology that would block children from accessing adult websites ahead of a crackdown on online porn. An artificial intelligence system developed by the web giant for estimating a person’s age based on their face has quietly been approved in the UK.


Thanks, MSFT Copilot. A good enough eyeball with a mobile phone, a pencil, a valise, stealthy sneakers, and data.

Facial recognition, although widely used in some countries, continues to make some people nervous. But in the UK, the Google method will allow the government to obtain data to verify one’s age. The objective is to stop those who are younger than 18 from viewing “adult Web sites.”

The story reveals:

[Google] says the technology is 99.9pc reliable in identifying that a photo of an 18-year-old is under the age of 25. If users are believed to be under the age of 25, they could be asked to provide additional ID.

The phrase used to describe the approach is “face age estimation system.”
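The quoted figures imply a simple decision rule, sometimes called a challenge-age buffer: estimate an age from the face and escalate to a document check whenever the estimate falls below a cushion well above 18. A minimal sketch follows; the 25-year threshold comes from the article, while the function names and the shape of the decision are my assumptions about how such a gate might be wired.

```python
from dataclasses import dataclass

CHALLENGE_AGE = 25   # cushion cited in the article: look under 25, get asked for ID

@dataclass
class AccessDecision:
    allowed: bool
    needs_document_check: bool

def gate_adult_content(estimated_age: float) -> AccessDecision:
    """Face age estimation is noisy, so the gate does not compare against 18 directly.
    Anyone the model places under the challenge age is pushed to a document check."""
    if estimated_age >= CHALLENGE_AGE:
        return AccessDecision(allowed=True, needs_document_check=False)
    return AccessDecision(allowed=False, needs_document_check=True)
```

The cushion is what makes a 99.9 percent figure operational: the model only has to separate 18-year-olds from people who look 25 and older, not pin down an exact age.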

The cited newspaper article points out:

It is unclear what Google plans to use the system for. It could use it within its own services, such as YouTube and the Google Play app download store, or build it into its Chrome web browser to allow websites to verify that visitors are over 18.

Google is not the only outfit using facial recognition to allegedly reduce harm to individuals. Facebook and OnlyFans, according to the write up, are already deploying similar technology.

The news story says:

It is unclear what privacy protections Google would apply to the system.

I wonder what interesting insights would be available if data from the FAES were cross-correlated with other information. That might have value to advertisers and possibly other commercial or governmental entities.

Stephen E Arnold, December 18, 2023

FTC Enacts Investigative Process On AI Products and Services

December 15, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Creative types and educational professionals are worried about the influence of AI-generated work. However, legal, finance, business operations, and other industries are worried about how AI will impact them. Aware of the upward trend in AI goods and services surreptitiously moving into the market, the Federal Trade Commission (FTC) took action. The FTC released a briefing on the new consumer AI protection: “FTC Authorizes Compulsory Process for AI-Related Products and Services.”


The executive recruiter for a government contractor says, “You can earn great money with a side gig helping your government validate AI algorithms. Does that sound good?” Will American schools produce enough AI savvy people to validate opaque and black box algorithms? Thanks, MSFT Copilot. You hallucinated on this one, but your image was good enough.

The FTC passed an omnibus resolution that authorizes a compulsory process in nonpublic investigations of products and services that use AI, claim to be made with AI, or claim to detect it. The new omnibus resolution will increase the FTC’s efficiency with civil investigative demands (CIDs), a compulsory process similar to a subpoena. CIDs are issued to collect information, similar to legal discovery, for consumer protection and competition investigations. The resolution will be in effect for ten years, and the FTC voted 3-0 to approve it.

The FTC defines AI as:

“AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans. Many companies now offer products and services using AI and generative AI, while others offer products and services that claim to detect content made by generative AI.”

AI can also be used for deception, privacy infringements, fraud, and other illegal activities. AI can also cause competition problems, for example, if a few companies monopolize algorithms or other AI-related technologies.

The FTC is taking preliminary steps to protect consumers from bad actors and their nefarious AI-generated deeds. However, what constitutes a violation in relation to AI? Will the data training libraries be examined along with the developers? Where will the expert analysts come from? An online university training program?

Whitney Grace, December 15, 2023

Why Is a Generative System Lazy? Maybe Money and Lousy Engineering

December 13, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Great post on the Xhitter. From @ChatGPT app:

we’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it

My experience with Chat GPT is that it responds like an intern working with my team between the freshman and sophomore years at college. Most of the information output is based on a “least effort” algorithm; that is, the shortest distance between A and B is vague promises.


An engineer at a “smart” software company leaps into action. Thanks, MSFT Copilot. Does this cartoon look like any of your technical team?

When I read about “unpredictable,” I wonder if people realize that probabilistic systems are wrong for a certain percentage of outputs. The horse loses the race. Okay, a fact. The bet on that horse is a different part of the stall.

But the “lazier” comment evokes several thoughts in my dinobaby mind:

  1. Allocate less time per prompt to reduce the bottlenecks in a computationally expensive system; thus, laziness is a signal of crappy engineering
  2. Recognize that recycling results for frequent queries is a great way to give a user “something” close enough for horseshoes (see the sketch after this list). If the user is clever, that user will use words like “give me more” or some similar rah rah to trigger another pass through what’s available
  3. The costs of the system are so great that the Sam AI-Man operation is starved for cash for engineers, hardware, bandwidth, and computational capacity. Until there’s more dough, the pantry will be poorly stocked.
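Point 2 is just caching with a fuzzy lookup. Here is a minimal sketch of recycling answers for near-duplicate prompts; the normalization step and the similarity cutoff are my assumptions about how such a shortcut could work, not anything OpenAI has disclosed.

```python
import difflib

class RecycledAnswers:
    """Serve a stored answer when a new prompt looks close enough to an old one."""

    def __init__(self, similarity_cutoff: float = 0.92):
        self.cutoff = similarity_cutoff
        self.store = {}   # normalized prompt -> previously generated answer

    @staticmethod
    def _normalize(prompt: str) -> str:
        return " ".join(prompt.lower().split())

    def lookup(self, prompt: str):
        """Return a recycled answer, or None if nothing is close enough for horseshoes."""
        key = self._normalize(prompt)
        for old_prompt, answer in self.store.items():
            if difflib.SequenceMatcher(None, key, old_prompt).ratio() >= self.cutoff:
                return answer
        return None

    def remember(self, prompt: str, answer: str) -> None:
        self.store[self._normalize(prompt)] = answer
```

If something like this sits in front of the model, “lazy” output is just a cache hit that was not quite close enough.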

Net net: Lazy may be a synonym for more serious issues. How does one make AI perform? Fabrication and marketing seem to be useful.

Stephen E Arnold, December 13, 2023

Redefining Elite in the Age of AI: Nope, Redefining Average Is the News Story

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Business Insider has come up with an interesting swizzle on the AI thirst fest: “AI Is the Great Equalizer.” The subtitle is quite suggestive for a technology that is over 50 years in the making and just one year into its razzle-dazzle next-big-thing phase with the OpenAI generative pre-trained transformer.


The teacher (the person with the tie) is not quite as enthusiastic about Billy, Kristie, and Mary. The teacher knows that each is a budding Einstein, a modern-day Gertrude Stein, or an Ada Lovelace in the eyes of the parent. The reality is that big-time performers are a tiny percentage of any given cohort. One blue chip consulting firm complained that it had to interview 1,000 people to identify a person who could contribute. That was self-congratulatory like Oscar Meyer slapping the Cinco Jota label on a pack of baloney. The perceptions about the impact of a rapidly developing technology on average performers are interesting, but their validity is unknown. Thanks, MSFT Copilot, you have the parental pride angle down pat. What inspired you? A microchip?

In my opinion, the main idea in the essay is:

Education and expertise won’t count for as much as they used to.

Does this mean the falling scores for reading and math are a good thing? Just let one of the techno giants do the thinking: Is that the message?

I loved this statement about working in law firms. In my experience, the assertion applies to consulting firms as well. There is only one minor problem, which I will mention after you scan the quote:

This is something the law-school study touches on. “The legal profession has a well-known bimodal separation between ‘elite’ and ‘nonelite’ lawyers in pay and career opportunities,” the authors write. “By helping to bring up the bottom (and even potentially bring down the top), AI tools could be a significant force for equality in the practice of law.”

The write up points out that AI won’t have much of an impact on the “elite”; that is, the individuals who can think, innovate, and make stuff happen. About the hiring strategies of companies contacted about the impact of AI, the write up says:

They [These firms’ executives] are aiming to hire fewer entry-level people straight out of school, since AI can increasingly take on the straightforward, well-defined tasks these younger workers have traditionally performed. They plan to bulk up on experts who can ace the complicated stuff that’s still too hard for machines to perform.

The write up is interesting, but it is speculative, not a description of what’s actually happening.

Here’s what we know about the ChatGPT-type revolution after one year:

  1. Cyber criminals have figured out how to use generative tools to crank out more cyber crime that depends on writing prose or generating scripts. Score one for the bad actors.
  2. Older people are either reluctant to fool around with what appears to be “magical” software or fearful of it. Therefore, the uptake at work is likely to be slower and probably more cautious than for some who are younger at heart. Score one for Luddites and automation-related protests.
  3. The younger folk will use any online service that makes something easier or more convenient. Want to buy contraband? Hit those Telegram-type groups. Want to write a report about a new procedure? Hey, let a ChatGPT-type system do it? Worry about its accuracy or appropriateness? Nope, not too much.

Net net: Change is happening, but the use of smart outputs by people who cannot read, do math, or think about Kant’s ideas is unlikely to do much more than add friction to an already creaky bureaucratic machine. As for the future, I don’t know. This dinobaby is not fearful of admitting it.

As for lawyers, remember what Shakespeare said:

“The first thing we do, let’s kill all the lawyers.”

The statement by Dick the Butcher may apply to quite a few in “knowledge” professions. Including some essayists like this dinobaby and many, many others. The rationale is to just keep the smartest ones. AI is good enough for everything else.

Stephen E Arnold, December 12, 2023

Problematic Smart Algorithms

December 12, 2023

This essay is the work of a dumb dinobaby. No smart software required.

We already know that AI is fundamentally biased if it is trained with bad or polluted data models. Most of these biases are unintentional and due to ignorance on the part of the developers, i.e., a lack of diversity or vetted information. In order to improve the quality of AI, developers are relying on educated humans to help shape the data models. Not all of the AI projects are looking to fix their polluted data, and ZD Net says it’s going to be a huge problem: “Algorithms Soon Will Run Your Life-And Ruin It, If Trained Incorrectly.”

Our lives are saturated with technology that has incorporated AI. Everything from an application used on a smartphone to a digital assistant like Alexa or Siri uses AI. The article tells us about another type of biased data, and it’s due to an ironic problem. The science team of Aparna Balagopalan, David Madras, David H. Yang, Dylan Hadfield-Menell, Gillian Hadfield, and Marzyeh Ghassemi worked on an AI project that studied how AI algorithms justified their predictions. The data model contained information from human respondents who provided different responses when asked to give descriptive or normative labels for data.

Descriptive labels concentrate on hard facts, while normative labels involve value judgments against a rule. The team noticed the pattern, so they conducted another experiment with four data sets to test different policies. The study asked the respondents to judge an apartment complex’s policy about aggressive dogs against images of canines with normative or descriptive tags. The results were astounding and scary:

"The descriptive labelers were asked to decide whether certain factual features were present or not – such as whether the dog was aggressive or unkempt. If the answer was "yes," then the rule was essentially violated — but the participants had no idea that this rule existed when weighing in and therefore weren’t aware that their answer would eject a hapless canine from the apartment.

Meanwhile, another group of normative labelers were told about the policy prohibiting aggressive dogs, and then asked to stand judgment on each image.

It turns out that humans are far less likely to label an object as a violation when aware of a rule and much more likely to register a dog as aggressive (albeit unknowingly) when asked to label things descriptively.

The difference wasn’t by a small margin either. Descriptive labelers (those who didn’t know the apartment rule but were asked to weigh in on aggressiveness) had unwittingly condemned 20% more dogs to doggy jail than those who were asked if the same image of the pooch broke the apartment rule or not.”
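A back-of-the-envelope way to see the kind of gap the study reports: collect both groups’ labels for the same images and compare violation rates. The numbers below are toy values for illustration, not the MIT team’s data.

```python
def violation_rate(labels):
    """Fraction of images a group of labelers flagged as violating the rule."""
    return sum(labels) / len(labels)

# 1 = "violates the aggressive-dog policy", 0 = "does not"; toy numbers only.
descriptive_labels = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]   # asked only about factual features
normative_labels   = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # told about the policy first

gap = violation_rate(descriptive_labels) - violation_rate(normative_labels)
print(f"Descriptive labelers condemned {gap:.0%} more dogs")   # 30% with these toy values
```

The point of such a check is that the bias lives in the training labels before any model ever sees them.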

The conclusion is that AI developers need to spread the word about this problem and find solutions. Then again, this could be another fear-mongering tactic like the Y2K implosion. What happened with that? Nothing. Yes, this is a problem, but it will probably be solved before society meets its end.

Whitney Grace, December 12, 2023

Did AI Say, Smile and Pay Despite Bankruptcy?

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Going out of business is a painful event for [a] the whiz kids who dreamed up an idea guaranteed to baffle grandma, [b] the friends, family, and venture capitalists who funded the sure-fire next Google, and [c] the “customers” or more accurately the “users” who gave the product or service a whirl and some cash.

Therefore, one who had taken an entry-level philosophy class as a sophomore might have brushed against the thorny bush of ethics. Some get scratched, emulate the folks who wore chains and sharpened nails under their Grieve St Laurent robes, and read medieval wisdom literature for fun. Others just dump that baloney and focus on figuring out how to exit Dodge City without a posse riding hard after them.


The young woman learns that the creditors of an insolvent firm may “sell” her account to companies which operate on a “pay or else” policy. Imagine. You have lousy teeth and you could be put in jail. Look at the bright side. In some nation states, prison medical services include dental work. Anesthetic? Yeah. Maybe not so much. Thanks, MSFT Copilot. You had a bit of a hiccup this morning, but you spit out a tooth with an image on it. Close enough.

I read “Smile Direct Club shuts down after Filing for Bankruptcy – What It Means for Customers.” With AI customer service solutions available, one would think that a zoom zoom semi-high tech outfit would find a way to handle issues in an elegant way. Wait! Maybe the company did, and this article documents how smart software may influence certain business decisions.

The story is simple. Smile Direct could not make its mail order dental business pay off. The cited news story presents what might be a glimpse of the AI future. I quote:

Smile Direct Club has also revealed its "lifetime smile guarantee" it previously offered was no longer valid, while those with payment plans set up are expected to continue making payments. The company has not yet revealed how customers can get refunds.

I like the idea that a “lifetime” is vague; therefore, once the company dies, the user is dead too. I enjoyed immensely the alleged expectation that customers who are using the mail order dental service — even though it is defunct and not delivering its “product” — will have to keep making payments. I assume that the friendly folks at online payment services and our friends at the big credit card companies will just keep doing the automatic billing. (Those payment institutions have super duper customer service systems in my experience. Yours, of course, may differ from mine.)

I am looking forward to straightening out this story. (You know. Dental braces. Straightening teeth via mail order. High tech. The next Google. Yada yada.)

Stephen E Arnold, December 11, 2023

Constraints Make AI More Human. Who Would Have Guessed?

December 11, 2023

This essay is the work of a dumb dinobaby. No smart software required.

AI developers could be one step closer to artificially recreating the human brain. Science Daily discusses a study from the University of Cambridge, “AI System Self-Organizes To Develop Features of Brains Of Complex Organisms.” Neural systems must organize themselves, form connections, and balance an organism’s competing demands. They need energy and resources to grow the organism’s physical body while also optimizing neural activity for information processing. This balancing act may explain why animal brains converge on similar organizational solutions.

Brains are built to solve and understand complex problems while expending as little energy as possible. Biological systems usually evolve to make the most of the energy and resources available to them.


“See how much better the output is when we constrain the smart software,” says the young keyboard operator. Thanks, MSFT Copilot. Good enough.

Scientists from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge experimented with this concept when they made a simplified brain model and applied physical constraints. The model developed traits similar to human brains.

The scientists tested the model brain system by having it navigate a maze. Maze navigation was chosen because it requires various tasks to be completed. The different tasks activate different nodes in the model. Nodes are similar to brain neurons. The brain model needed to practice navigating the maze:

“Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.

With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.”

The physical constraints on the model forced its nodes to react and adapt in ways similar to a human brain. The implication for AI is that such constraints could help algorithms process faster, handle more complex tasks, and advance the evolution of “robot” brains.
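The “physical constraint” amounts to making long connections expensive while the network learns. A minimal sketch of that idea: give every node a location, then penalize each connection weight in proportion to the distance it spans. The penalty form and the numbers are my assumptions, not the Cambridge team’s code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 50
positions = rng.uniform(size=(n_nodes, 2))                 # each node gets a spot in 2D space
weights = rng.normal(scale=0.1, size=(n_nodes, n_nodes))   # connection strengths between nodes

# How far apart every pair of nodes sits.
distances = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

def constrained_update(weights, task_gradient, distances, lr=0.01, strength=1.0):
    """One learning step: follow the task feedback, then pay the wiring bill.
    Long connections shrink faster than short ones, so local clusters emerge."""
    weights = weights - lr * task_gradient                             # ordinary feedback-driven step
    weights = weights - lr * strength * distances * np.sign(weights)   # distance-weighted penalty
    return weights
```

Squeeze a network with a rule like this and, per the study, the hubs and modular wiring of biological brains start to appear on their own.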

Whitney Grace, December 11, 2023

Weaponizing AI Information for Rubes with Googley Fakes

December 8, 2023

This essay is the work of a dumb dinobaby. No smart software required.

From the “Hey, rube” department: “Google Admits That a Gemini AI Demo Video Was Staged” reports as actual factual:

There was no voice interaction, nor was the demo happening in real time.


Young Star Wars’ fans learn the truth behind the scenes which thrill them. Thanks, MSFT Copilot. One try and some work with the speech bubble and I was good to go.

And to what magical event does this mysterious statement refer? The Google Gemini announcement. Yep, 16 Hollywood style videos of “reality.” Engadget asserts:

Google is counting on its very own GPT-4 competitor, Gemini, so much that it staged parts of a recent demo video. In an opinion piece, Bloomberg says Google admits that for its video titled “Hands-on with Gemini: Interacting with multimodal AI,” not only was it edited to speed up the outputs (which was declared in the video description), but the implied voice interaction between the human user and the AI was actually non-existent.

The article makes what I think is a rather gentle statement:

This is far less impressive than the video wants to mislead us into thinking, and worse yet, the lack of disclaimer about the actual input method makes Gemini’s readiness rather questionable.

Hopefully sometime in the near future Googlers can make reality from Hollywood-type fantasies. After all, policeware vendors have been trying to deliver a Minority Report-type of investigative experience for a heck of a lot longer.

What’s the most interesting part of the Google AI achievement? I think it illuminates the thinking of those who live in an ethical galaxy far, far away… if true, of course. Of course. I wonder if the same “fake it til you make it” approach applies to other Google activities?

Stephen E Arnold, December 8, 2023

Google Smart Software Titbits: Post Gemini Edition

December 8, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In the Apple-inspired roll out of Google Gemini, the excitement is palpable. Is your heart palpitating? Ah, no. Neither is mine. Nevertheless, in the aftershock of a blockbuster “me too,” the knowledge shrapnel has peppered my dinobaby lair; to wit: Gemini, according to Wired, is a “new breed” of AI. The source? Google’s Demis Hassabis.


What happens when the marketing does not align with the user experience? Tell the hardware wizards to shift into high gear, of course. Then tell the marketing professionals to evolve the story. Thanks, MSFT Copilot. You know I think you enjoyed generating this image.

Navigate to “Google Confirms That Its Cofounder Sergey Brin Played a Key Role in Creating Its ChatGPT Rival.” That’s a clickable headline. The write up asserts: “Google hinted that its cofounder Sergey Brin played a key role in the tech giant’s AI push.”

Interesting. One person involved in both Google and OpenAI. And Google responding to OpenAI after one year? Management brilliance or another high school science club method? The right information at the right time is nine-tenths of any battle. Was Google not processing information? Was the information it received about OpenAI incorrect or weaponized? Now Gemini is a “new breed” of AI. The Verge reports that McDonald’s burger joints will use Google AI to “make sure your fries are fresh.”

Google has been busy in non-AI areas; for instance:

  • The Register asserts that a US senator claims Google and Apple reveal push notification data to non-US nation states
  • Google has ramped up its donations to universities, according to TechMeme
  • Lost files you thought were in Google Drive? Never fear. Google has a software tool you can use to fix your problem. Well, that’s what Engadget says.

So an AI problem? What problem?

Stephen E Arnold, December 8, 2023
