Google the Great Brings AI to Message Searches

July 25, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

AI is infiltrating Gmail users’ inboxes. Android Police promises, “Gmail’s New Machine Learning Models Will Improve your Search Results.” Writer Chethan Rao points out this rollout follows June’s announcement of the Help me write feature, which deploys an algorithm to compose one’s emails. He describes the new search tool:

“The most relevant search results are listed under a section called Top results after this update. The rest of them will be listed beneath All results in mail, with these being filtered based on recency, according to the Workspace Blog. Google says this would let people find what they’re looking for ‘with less effort.’ Expanding on the methodology a little bit, the company said (via 9to5Google) its machine learning models will take into account the search term itself, in addition to the most recent emails and ‘other relevant factors’ to pull up the results best suited for the user. The functionality has just begun rolling out this Friday [May 02, 2023], so it could take a couple of weeks before making it to all Workspace or personal Google account holders. Luckily, there are no toggles to enable this feature, meaning it will be automatically enabled when it reaches your device.”

“Other relevant factors.” Very transparent. Kind of them to eliminate the pesky element of choice here. We hope the system works better than Gmail’s recent blue checkmark system (how original), which purported to mark senders one can trust but ended up doing the opposite.
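Google has not published how these signals are weighted, but the behavior described above (a handful of “Top results” ranked by relevance and recency, with everything else filed under “All results in mail”) can be pictured with a minimal sketch. The field names, 30-day decay, and blend weight below are invented for illustration only, not Google’s formula:

```python
import math
import time

def score_email(email: dict, query_terms: set, recency_weight: float = 0.3) -> float:
    # Toy score: term overlap blended with an exponential recency decay.
    # Field names ("subject", "body", "timestamp") and the constants are
    # assumptions for illustration; Google has not disclosed its ranking.
    words = set(email["subject"].lower().split()) | set(email["body"].lower().split())
    term_overlap = len(query_terms & words) / max(len(query_terms), 1)
    age_days = (time.time() - email["timestamp"]) / 86400
    recency = math.exp(-age_days / 30)  # messages older than about a month fade fast
    return (1 - recency_weight) * term_overlap + recency_weight * recency

def top_results(emails: list, query: str, k: int = 3) -> list:
    # The k best-scoring messages would sit under "Top results"; the rest
    # would appear under "All results in mail," sorted by recency.
    terms = set(query.lower().split())
    return sorted(emails, key=lambda e: score_email(e, terms), reverse=True)[:k]
```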

Buckle up. AI will be helping you in every Googley way.

Cynthia Murrell, July 25, 2023

And Now Here Is Sergey… He Has Returned

July 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am tempted to ask one of the art generators to pump out an image of the Terminator approaching the executive building on Shoreline Drive. But I won’t. I also thought of an image of Clint Eastwood, playing the role of the Man with No Name, wearing a ratty horse blanket to cover his big weapon. But I won’t. I thought of Tom Brady joining the Tampa Bay football team wearing a grin and the full Monty baller outfit. But I won’t. Assorted religious images flitted through my mind, but I knew that if I entered a proper name for the ace Googler and identified a religious figure, MidJourney would demand that I interact with a “higher AI.” I follow the rules, even wonky ones.


The gun fighter strides into the developer facility and says, “Drop them-thar Foosball handles. We are going to make that smart software jump through hoops.” One of the champion Foosballers sighs, “Welp. Excuse me. I have to call my mom and dad. I feel nauseous.” MidJourney provided the illustration for this dramatic scene. Ride ’em, code wrangler.

I will simply point to “Sergey Brin Is Back in the Trenches at Google.” The sub-title to the real news story is:

Co-founder is working alongside AI researchers at tech giant’s headquarters, aiding efforts to build powerful Gemini system.

I love the word “powerful.” Titan-esque, charged with meaning, and pumped up as the theme from Rocky plays softly in the background, syncopated with the sound of clicky keyboards.

Let’s think about what the return to Google means.

  1. The existing senior management team is out of ideas. Microsoft stumbles forward, revealing ways to monetize good enough smart software. With hammers from Facebook and OpenAI, the company is going to pound hard for subscription upsell revenue. Big companies will buy… Why? Because … Microsoft.
  2. Mr. Brin is a master mechanic. And the new super smart big brain artificial intelligence unit (which is working like a well-oiled Ferrari with two miles on the clock) is due for an oil change, new belts, and a couple of electronic sensors once the new owner gets the vehicle to his or her domicile. Ferrari knows how to bill for service, even if the zippy machine does not run like a five-year-old Toyota Tundra.
  3. Mr. Brin knows how to take disparate items and glue them together. He and his sidekick did it with Web search, adding such me-too innovations as GoTo, Overture, and Yahoo-inspired online pay-to-play ideas. Google’s brilliant Bard needs this type of bolt-on. Mr. Brin knows bolt-ons. Clever, right?

Are these three items sufficiently umbrella-like to cover the domain of possibilities? Of course not. My personal view is that item one, management’s inability to hit a three-point shot, let alone a slam dunk over Sam AI-Man, requires the 2023 equivalent of asking Mom and Dad to help. Some college students have resorted to this approach to make rent, post bail, or buy food.

The return is not yet like Mr. Terminator’s, Mr. Man-with-No-Name’s, or Mr. Brady’s. We have something new. A technology giant with billions in revenue struggling to get its big tractor out of a muddy field. How does one get the Google going?

“Dad, hey it’s me. I need some help.”

Stephen E Arnold, July 24, 2023

Citation Manipulation: Fiddling for Fame and Grant Money Perhaps?

July 24, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

A fact about science and academia is that these fields are incredibly biased. Researchers, scientists, and professors are always on the hunt for funding and prestige. While these professionals state they uphold ethical practices, they are still human. In other words, they violate their ethics for a decent reward. Another prize for these individuals is being published, but even publishers are becoming partial, says Nature in “Researchers Who Agree To Manipulate Citations Are More Likely To Get Their Papers Published.”


A former university researcher practices his new craft: rigging dice for gangs running craps games. He said to my fictional interviewer, “The skills are directly transferable. I use dice manufactured by other people. I manipulate them. My degrees in statistics allow me to calculate what weights are needed to tip the odds. This new job pays well too. I do miss the faculty meetings, but the gang leaders often make it clear that if I need anything special, those fine gentlemen will accommodate my wishes.” MidJourney seems to have an affinity for certain artistic creations like people who create loaded dice.

A recent study from Research Policy discovered that researchers are coerced by editors to include superfluous citations in their papers. Those who give in to the editors have a higher chance of getting published. If the citations are relevant to the researchers’ topic, what is the big deal? The problem is that the citations might not accurately represent the research or augment the original data. There is also the pressure to comply with industry politics:

“When scientists are coerced into padding their papers with citations, the journal editor might be looking to boost either their journal’s or their own citation counts, says study author Eric Fong, who studies research management at the University of Alabama in Huntsville. In other cases, peer reviewers might try to persuade authors to cite their work. Citation rings, in which multiple scholars or journals agree to cite each other excessively, can be harder to spot, because there are several stakeholders involved, instead of just two academics disproportionately citing one another.”
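For readers curious why a reciprocal pair is easier to spot than a ring, here is a minimal sketch of an author-level citation graph check. It is my illustration, not the method of the Research Policy study (which relied on surveying authors); the networkx usage is real, but the thresholds and example edges are invented.

```python
import networkx as nx

def suspicious_citation_groups(citations, max_ring=4):
    # citations: list of (citing_author, cited_author) pairs.
    # Reciprocal pairs are trivial to flag; short directed cycles stand in for
    # the "citation rings" described above. Illustrative only.
    g = nx.DiGraph(citations)
    pairs = {tuple(sorted((a, b))) for a, b in g.edges if g.has_edge(b, a)}
    rings = [c for c in nx.simple_cycles(g) if 2 < len(c) <= max_ring]
    return pairs, rings

# Hypothetical example: two authors citing each other plus a three-way ring.
edges = [("A", "B"), ("B", "A"), ("X", "Y"), ("Y", "Z"), ("Z", "X")]
print(suspicious_citation_groups(edges))
```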

The study is over a decade old, but its results pertain to today’s scientific and academic environment. Academic journals want to inflate their citations to “justify” their importance to the industry and maybe even to preserve the paywall incentive. Researchers are also pressured to add more authors because it helps someone pad their resume.

These are not good practices for protecting the integrity of science and academia, but they are better than lying about results.

Whitney Grace, July 24, 2023

AI Commitments: But What about Chipmunks and the Bunny Rabbits?

July 23, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI sent executives to a meeting held in “the White House” to agree on some ground rules for “artificial intelligence.” AI is available from a number of companies and as free downloads as open source. Rumors have reached me suggesting that active research and development are underway in government agencies, universities, and companies located in a number of countries other than the U.S. Some believe the U.S. is the Zeus of AI, assisted by Naiads. Okay, but you know those Greek gods can be unpredictable.

Thus, what’s a commitment? I am not sure what the word means today. I asked You.com, a smart search system, to define the term for me. The system dutifully returned this explanation:

commitment is defined as “an agreement or pledge to do something in the future; the state or an instance of being obligated or emotionally impelled; the act of committing, especially the act of committing a crime.” In general, commitment refers to a promise or pledge to do something, often with a strong sense of dedication or obligation. It can also refer to a state of being emotionally invested in something or someone, or to the act of carrying out a particular action or decision.

Several words and phrases jumped out at me; namely, “do something in the future.” What does “do” mean? What is “the future”? Next week, next month, a decade from a specific point in time, etc.? “Obligated” is an intriguing word. What compels the obligation? A threat, a sense of duty, an understanding of a shared ethical fabric? “Promise” evokes a young person’s statement to a parent when caught drinking daddy’s beer; for example, “Mom, I promise I won’t do that again.” The “emotional” investment is an angle that reminds me that 40 to 50 percent of first marriages end in divorce. Commitments — even when bound by social values — are flimsy things for some. Would I fly on a commercial airline whose crash rate was 40 to 50 percent? Would you?


“Okay, we broke the window. Now what do we do?” asks the leader of the pack. “Run,” says the brightest of the group. “If we are caught, we just say, ‘Okay, we will fix it.’” “Will we?” asks the smallest of the gang. “Of course not,” replies the leader. Thanks, MidJourney, you create original kid images well.

Why make any noise about commitment?

I read “How Do the White House’s A.I. Commitments Stack Up?” The write up is a personal opinion about an agreement between “the White House” and the big US players in artificial intelligence. The focus was understandable because those in attendance are wrapped in the red, white, and blue; presumably pay taxes; and want to do what’s right, save the rain forest, and be green.

Some of the companies participating in the meeting have testified before Congress. I recall at least one of the firms’ senior managers saying, “Senator, thank you for that question. I don’t know the answer. I will have my team provide that information to you…” My hunch is that a few of the companies in attendance at the White House meeting could use the phrase or a similar one at some point in the “future.”

The table below lists most of the commitments to which the AI leaders showed some receptivity. The left-hand column presents the commitments; the right-hand column offers some hypothesized reactions from a nation state quite opposed to the United States, the US dollar, the hegemony of US technology, baseball, apple pie, etc.

Commitment | Gamed response
Security testing before release | Based on historical security activities, not to worry
Sharing AI information | Let’s order pizza and plan a front company based in Walnut Creek
Protect IP about models | Let’s canvass our AI coders and pick some to get jobs at these outfits
Permit pentesting | Yes, pentesting. Order some white hats with happy faces
Tell users when AI content is produced | Yes, let’s become registered users. Who has a cousin in Mountain View?
Report about use of the AI technologies | Make sure we are on the mailing list for these reports
Research AI social risks | Do we own a research firm? Can we buy the research firm assisting these US companies?
Use AI to fix up social ills | What is a social ill? Call the general, please, and ask.

The PR angle is obvious. I wonder if commitments will work. The firms have one objective; that is, to meet the expectations of their stakeholders. In order to do that, the firms must operate from the baseline of self-interest.

Net net: A plot of techno-land now has a few big outfits working and thinking hard about how to buy up the best plots. What about zoning, government regulations, and doing good things for small animals and wild flowers? Yeah. No problem.

Stephen E Arnold, July 23, 2023

Silicon Valley and Its Busy, Busy Beavers

July 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Several stories caught my attention.


Google’s busy beavers have been active: AI, pricing tactics, quantum goodness, and team building. Thanks, MidJourney, but you left out the computing devices which no high-value beaver goes without.

Google has allowed its beavers to gnaw on some organic material to build some dams. Specifically, the newspapers which have been affected by Google’s online advertising (no, I am not forgetting Craigslist.com; I am just focusing on the Google at the moment) can avail themselves of AI. The idea is… cost cutting. Could there be some learnings for the Google? What I mean is that such a series of tests or trials provides the Google with telemetry. Such telemetry allows the Google to refine its news writing capabilities. The trajectory of such knowledge may allow the Google to embark on its own newspaper experiment. Where will that lead? I don’t know, but it does not bode well for real journalists or some other entities.

The YouTube price increase is positioned as a better experience. Could the sharp increase in ads before, during, and after a YouTube video be part of a strategy? What I am hypothesizing is that more ads will force users to pay to be able to watch a YouTube video without being driven crazy by ads for cheap mobile, health products, and gun belts. Deteriorating the experience allows a customer to buy a better experience. Could that be semi-accurate?

The quantum supremacy thing strikes me as 100 percent PR with a dash of high school braggadocio. The write up speaks to me this way: “I got a higher score on the SAT.” Snort snort snort. The snorts are a sound track to putting down those whose machines just don’t have the right stuff. I wonder if this is how others perceive the article.

And the busy beavers turned up at the White House. The beavers say, “We will be responsible with this AI stuff.  We AI promise.” Okay, I believe this because I don’t know what these creatures mean when the word “responsible” is used. I can guess, however.

Net net: The ethicist from Harvard and the soon-to-be-former president of Stanford are available to provide advisory services. Silicon Valley is a metaphor for many good things, especially for the companies and their senior executives. Life will get better and better with certain high technology outfits running the show, pulling the strings, and controlling information, won’t it?

Stephen E Arnold, July 21, 2023

TikTok: Ever Innovative and Classy Too

July 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I have no idea if the write up is accurate. Without doing any deep thinking or even cursory research, the story seems so appropriate for our media environment. (I almost typed medio ambiente. Yikes. The dinobaby is really old on this hot Friday afternoon.)


“Of course, I share money from videos of destitute people crying in inclement weather. It is the least I can do. I am working on a feature film now,” says the brilliant innovator who has his finger on the pulse of the TikTok viewer. The image of this paragon popped out of the MidJourney microwave quickly.

Here’s the title: “People on TikTok Are Paying Elderly Women to Sit in Stagnant Mud for Hours and Cry.” Yes, that’s the story. The write up states as actual factual:

Over hours, sympathetic viewers send “coins” and gifts that can be exchanged for cash, amounting to several hundred dollars per stream, says Sultan Akhyar, the man credited with inventing the trend. Emojis of gifts, roses, and well-wishes float up gently from the bottom of the live feed. The viral phenomenon known as mandi lumpur, or “mud baths,” gained notoriety in January when several livestreams were posted from Setanggor village …

Three quick observations:

  1. The classy vehicle for this entertainment is TikTok.
  2. Money is involved and shared immediately. Yep, immediately.
  3. Live video, the entertainment of the here-and-now.

I am waiting for the next innovation that takes crying in the mud to another level.

Stephen E Arnold, July 21, 2023

Yet Another Way to Spot AI Generated Content

July 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The dramatic emergence of ChatGPT has people frantically searching for ways to distinguish AI-generated content from writing by actual humans. Naturally, many are turning to AI solutions to solve an AI problem. Some tools have been developed that detect characteristics of dino-baby writing, like colloquialisms and emotional language. Unfortunately for the academic community, these methods work better on Reddit posts and Wikipedia pages than academic writings. After all, research papers have employed a bone-dry writing style since long before the emergence of generative AI.


Which tea cup is worth thousands and which is a fabulous fake? Thanks, MidJourney. You know your cups or you are in them.

Cell Reports Physical Science details the development of a niche solution in the ad article, “Distinguishing Academic Science Writing from Humans or ChatGPT with Over 99% Accuracy Using Off-the-Shelf Machine Learning Tools.” We learn:

“In the work described herein, we sought to achieve two goals: the first is to answer the question about the extent to which a field-leading approach for distinguishing AI- from human-derived text works effectively at discriminating academic science writing as being human-derived or from ChatGPT, and the second goal is to attempt to develop a competitive alternative classification strategy. We focus on the highly accessible online adaptation of the RoBERTa model, GPT-2 Output Detector, offered by the developers of ChatGPT, for several reasons. It is a field-leading approach. Its online adaptation is easily accessible to the public. It has been well described in the literature. Finally, it was the winning detection strategy used in the two most similar prior studies. The second project goal, to build a competitive alternative strategy for discriminating scientific academic writing, has several additional criteria. We sought to develop an approach that relies on (1) a newly developed, relevant dataset for training, (2) a minimal set of human-identified features, and (3) a strategy that does not require deep learning for model training but instead focuses on identifying writing idiosyncrasies of this unique group of humans, academic scientists.”

One of these idiosyncrasies, for example, is a penchant for equivocal terms like “but,” “however,” and “although.” Developers used the open source XGBoost software library for this project. The write-up describes the tool’s development and results at length, so navigate there for those details. But what happens, one might ask, the next time ChatGPT levels up? And the next? And so on? We are assured developers have accounted for this game of cat and mouse and will release updated tools quickly each time the chatbot evolves. What a winner—for the marketing team, that is.
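The general recipe the paper describes, a handful of hand-picked writing cues fed to an off-the-shelf XGBoost classifier, can be sketched in a few lines. The three features below are stand-ins of my own choosing; the published work uses a larger, validated feature set and reports its 99% figure on its own dataset, not on this toy.

```python
import re
import numpy as np
from xgboost import XGBClassifier

HEDGE_WORDS = ("but", "however", "although")

def features(text):
    # Three hand-picked cues per passage: hedge-word rate, sentence count,
    # and mean sentence length. Illustrative stand-ins only.
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    hedge_rate = sum(words.count(w) for w in HEDGE_WORDS) / max(len(words), 1)
    return [hedge_rate, float(len(sentences)), len(words) / max(len(sentences), 1)]

def train_detector(texts, labels):
    # labels: 1 = human academic writing, 0 = ChatGPT output.
    X = np.array([features(t) for t in texts])
    model = XGBClassifier(n_estimators=200, max_depth=3)
    model.fit(X, np.array(labels))
    return model
```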

Cynthia Murrell, July 21, 2023

Threads: Maybe Bad Fuel?

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Is this headline sad? “Threads Usage Drops By Half From Initial Surge” reports that the alleged cage fighters’ social messaging systems are in flux.


“That old machine sure looks dead to me,” observes the car owner to his associates. One asks, “Can it be fixed?” The owner replies, “I hope not.” MidJourney deserves a pat on its digital head for this artwork, doesn’t it?

This week it is the Zuckbook in decline. The cited article reports:

On its best day, July 7, Threads had more than 49 million daily active users on Android, worldwide, according to Similarweb estimates. That’s about 45% of the usage of Twitter, which had more than 109 million active Android users that day. By Friday, July 14, Threads was down to 23.6 million active users, or about 22% of Twitter’s audience.

The message is, “Threads briefly captured a big chunk of Twitter’s market.”

The cited article adds some sugar to the spoiled cake:

If Threads succeeds vs Twitter, the Instagram edge will be a big reason.

Two outstanding services. Two outstanding leaders. Can the social messaging sector pick a winner? Does anyone wonder how much information influence the winner will have?

I do. Particularly when the two horses in the race are Musk from Beyond and Zuck the Muscular.

Stephen E Arnold, July 20, 2023

Grasping at Threads and Missing Its Potential for Weaponized Information Delivery

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

My great-grandmother, bless her, used to draw flowers in pots. She used Crayola crayons. Her “style” reminded me of a ball of tangled thread. Examples of her work are lost to time. But MidJourney produced this image which is somewhat similar to how she depicted blooms:


This is a rather uninspiring ball of thread generated by the once-creative MidJourney. Perhaps the system is getting tired?

Keep in mind I am recounting what I recall when I was in grade school in the early 1950s. I thought of these tangled images when I read “Engagement on Instagram’s Threads Has Cratered.” The article suggests that users are losing interest in the Zuck’s ball of thread. I noted this statement:

Time spent on the app dropped over 50% from 20 minutes to 8 minutes, analysts found.

I have spent some time with analysts in my career. I know that data can be as malleable as another toy in a child’s box of playthings; specifically, the delightfully named and presumably non-toxic Play-Doh.

The article offers this information too:

Threads was unveiled as Meta’s Twitter killer and became available for download in the U.S. on July 5, and since then, the platform has garnered well over 100 million users, who are able to access it directly from Instagram. The app has not come without its fair share of issues, however.

Threads — particularly when tangled up — can be a mess. But the Zuckbook has billions of users of its properties. A new service taps an installed base and has a trampoline effect. When I was young, trampolines were interesting for a short period of time. The article is not exactly gleeful, but I detected some negativity toward the Zuck’s most recent innovation in me-too technology.

Now back to my great-grandmother (bless her, of course). She took the notion of tangled thread and converted it into flower blossoms. My opinion is that Threads will become another service used by actors less benign than my great-grandmother (bless her again). The ability to generate weaponized information, link to those little packets of badness, and augment other content is going to be of interest to some entities.

A free social media service can deliver considerable value to a certain segment of online users. The Silicon Valley “real” news folks may be writing about Threads to say, “The Zuck’s Threads service is a tangled mess.” The more important angle, in my opinion, is that it provides another, possibly quite useful service to those who seek to cause effects not nearly as much fun as saying, “Zuck’s folly flops.” It may, but in the meantime, Threads warrants close observation, not Play-Doh data. Perhaps those wrestling with VPN bans will explore technical options for bypassing packet inspection, IP blocks, port blocks, Fiverr gig workers, or colleagues in the US?

Stephen E Arnold, July 20, 2023

Will AI Replace Interface Designers? Sure, Why Not?

July 20, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Lost in the flaming fluff surrounding generative AI is one key point: Successful queries require specialized expertise. A very good article from the Substack blog Public Experiments clearly explains why “Natural Language Is an Unnatural Interface.”


A modern, intuitive, easy-to-use interface. That’s the ticket, MidJourney. Thanks for the output.

We have been told to approach ChatGPT and similar algorithms as we would another human. That’s persuasive marketing but terrible advice. See the post for several reasons this is so (beyond the basic fact that AIs are not humans). Instead, advises writer Varun Shenoy, developers must create user-friendly interfaces that carry one right past the pitfalls. He explains:

“An effective interface for AI systems should provide guardrails to make them easier for humans to interact with. A good interface for these systems should not rely primarily on natural language, since natural language is an interface optimized for human-to-human communication, with all its ambiguity and infinite degrees of freedom. When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore, can only do as much as is described by the prompt. Under that light, prompting is a lot like programming. You have to describe exactly what you want and provide as much information as possible. Unlike interacting with humans, LLMs lack the social or professional context required to successfully complete a task. Even if you lay out all the details about your task in a comprehensive prompt, the LLM can still fail at producing the result that you want, and you have no way to find out why. Therefore, in most cases, a ‘prompt box’ should never be shoved in a user’s face. So how should apps integrate LLMs? Short answer: buttons.”

Users do love buttons. And though this advice might seem like an oversimplification, Shenoy observes most natural-language queries fall into one of four categories: summarization, simple explanations, multiple perspectives, and contextual responses. The remaining use cases are so few he is comfortable letting ChatGPT handle them. Shenoy points to GitHub Copilot as an example of an effective constrained interface. He feels so strongly about the need to corral queries he expects such interfaces will be *the* products of the natural language field. One wonders—when will such a tool pop up in the MS Office Suite? And when it does, will the fledgling Prompt Engineering field become obsolete before it ever leaves the nest?
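The “buttons, not prompt boxes” idea is easy to picture in code: each button fills a fixed, vetted prompt template for one of the four categories above, and the user never types free-form instructions. The template text, button names, and the generic llm callable below are hypothetical placeholders, not anything from Shenoy’s post or a particular API.

```python
# Each "button" maps to a fixed, vetted template; no free-form prompt box.
# Templates and names are invented for illustration.
BUTTON_TEMPLATES = {
    "summarize":    "Summarize the following text in three bullet points:\n{text}",
    "explain":      "Explain the following text in plain language for a non-expert:\n{text}",
    "perspectives": "Give three distinct perspectives on the following text:\n{text}",
    "answer":       "Answer the question using only the text below.\nQuestion: {question}\nText:\n{text}",
}

def run_button(button, llm, **fields):
    # llm is any completion callable that takes a prompt string and returns text.
    prompt = BUTTON_TEMPLATES[button].format(**fields)
    return llm(prompt)

# Usage (hypothetical): run_button("summarize", my_llm, text=document_text)
```

The design point is that the application, not the user, supplies the context an LLM cannot infer; the buttons constrain the degrees of freedom that make natural language an unnatural interface.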

Cynthia Murrell, July 20, 2023
