Google Search: A Hellscape? Nah, the Greasy Mess Is Much Bigger
January 23, 2023
I read “Google vs. ChatGPT Told by Aglio e Olio.”
The write up makes it clear that the author is definitely not Googley. Let’s look at a handful of statements and then consider them in the context of the greasy stuff and, of course, the crackling hellscape analogy. Imagine a hellscape on Shoreline Drive. I never noticed pools of flame, Beelzebub hanging in the parking lot, or people in chains being walked by one of Satan’s pals. Maybe I was not sufficiently alert?
How about this statement:
A single American company operating as a bottleneck behind the world’s information is a dangerous, and inefficient proposition and a big theme of the Margins is that monopolies are bad so it’s also on brand.
After 25 years, the light bulb clicked on and the modern Archimedes has discovered the secret to Googzilla. I recall the thrill of the Yahoo settlement and the green light for the initial public offering. I recall the sad statements of Foundem, which “found” itself going nowhere fast in search results. I recall a meeting in Paris in which comments were made about the difficulty of finding French tax form links in Google.fr search results. I remember the owner of a major Web site shouting at lunch about his traffic dropping from two million per month to 200,000. Ah, memories. But the reason these anecdotes come to my mind is a willing group of people who found free and convenient more valuable than old-fashioned research. You remember. Lycos, libraries, conversations, and those impedimenta to actual knowledge work.
Also, how about this statement?
I am assuming the costs and the risk I’ve mentioned above has been what’s been making Google keep its cards closer to its chest.
Ah, ha. Google is risk averse. As organizations become older and larger, what does one expect? I think of Google like Tom Brady or Cristiano Ronaldo. Google is not able to accept the fact that it is older, has a bum knee, and has lost some of its fangs. Remember the skeleton of the dinosaur in front of one of Google’s buildings? It was, as I recall, a Tyrannosaurus Rex. But it was missing a fang or two. Then the weather changed, and the actual dino died. Google is not keeping cards closer to its chest; Google does not know what to do. Regulators are no longer afraid to fine the big reptile again and again. Googlers become Xooglers and suggest that the company is losing the zip in its step. Some choose to compete and create a for-fee search system. Good luck with that! Looking at the skeleton, those cards could slip through the bones and scatter on the concrete.
And what about this statement?
the real reason Google is at risk that thanks to their monopoly position, the folks over at Mountain View have left their once-incredible search experience degenerate into a spam-ridden, SEO-fueled hellscape.
Catchy. Search engine optimization, based on my observations of the Google’s antics, was a sure-fire way to get marketers into dancing the Google hand jive. Then when SEO failed (as it usually would), those SEO experts became sales professionals for Google advertising and instructors in the way to create Web sites and content shaped to follow the Google jazz band.
Net net: The Google is big, and it is not going anywhere quickly. The past of Google is forgotten by many, but it includes a Google Glass attempted suicide, making babies in the legal department, and a heroin overdose on a yacht. Ah, bad search. What about a deeper look? Nah, just focus on ChatGPT, the first of many who will now probe the soft underbelly of Googzilla. Oh, sorry, Googzilla is a skeleton. The real beast is gone.
Stephen E Arnold, January 23, 2023
How to Make Chinese Artificial Intelligence Professionals Hop Like Happy Bunnies
January 23, 2023
Happy New Year! It is the Year of the Rabbit, and the write up “Is Copyright Eating AI?” may make some celebrants happier than the contents of a red envelope. The article explains that the US legal system may derail some of the more interesting, publicly accessible applications of smart software. Why? US legal eagles and the thicket of guard rails which comprise copyright.
The article states:
… neural network developers, get ready for the lawyers, because they are coming to get you.
That means that the interesting applications on the “look what’s new on the Internet” news service Product Hunt will disappear. Only big outfits can afford to bring and fight litigation. When I worked as an expert witness, I learned that money is not an issue of concern for some of the parties to a lawsuit. Those working as a robot repair technician for a fast food chain will want to avoid engaging in a legal dispute.
The write up also says:
If the AI industry is to survive, we need a clear legal rule that neural networks, and the outputs they produce, are not presumed to be copies of the data used to train them. Otherwise, the entire industry will be plagued with lawsuits that will stifle innovation and only enrich plaintiff’s lawyers.
I liked the word “survive.” Yep, continue to exist. That’s an interesting idea. Let’s assume that the US legal process brings AI development to a halt. Who benefits? I am a dinobaby living in rural Kentucky. Nevertheless, it seems to me that a country will just keep on working with smart software informed by content. Some of the content may be a US citizen’s intellectual property, possibly a hard drive with data from Los Alamos National Laboratory, or a document produced by a scientific and technical publisher.
It seems to me that smart software companies and research groups in a country with zero interest in US laws can:
- Continue to acquire content by purchase, crawling, or enlisting the assistance of third parties
- Use these data to update and refine their models
- Develop innovations not available to smart software developers in the US.
Interesting, and with the present efficiency of some legal and regulatory systems, my hunch is that bunnies in China are looking forward to 2023. Will an innovator use enhanced AI for information warfare or other weapons? Sure.
Stephen E Arnold, January 23, 2023
ChatGPT Spells Trouble for the Google
January 20, 2023
The estimable New York Times published “Google Calls In Help From Larry Page and Sergey Brin for A.I. Fight.” [Note: You will find this write up behind a paywall, of course.] You may know that Google is on “Red Alert” because people are (correctly or incorrectly) talking about ChatGPT as the next big thing. Nothing is more terrifying than once being a next big thing and learning that there is another next big thing. The former next big thing is caught in a phase change; therefore, discomfort replaces comfort.
The Gray Lady states:
The re-engagement of Google’s founders, at the invitation of the company’s current chief executive, Sundar Pichai, emphasized the urgency felt among many Google executives about artificial intelligence and that chatbot, ChatGPT.
Ah, ha. Re-engagement. Messrs. Brin and Page have kept a low profile. Mr. Brin manages his social life, and Mr. Page enjoys his private island. Now bang! Those clever Backrub innovators are needed by the Google senior managers.
The Google is in action mode. I have mentioned papers by Google which explain the really super duper smart software that will be racing down the Information Superhighway. Soon. Any day now. The NYT story states:
Google now intends to unveil more than 20 new products and demonstrate a version of its search engine with chatbot features this year
And the weapon of choice? A PowerPoint type of presentation. Okay. A slide deck. The promise of great things from innovators. The hitch in the git-along is that the sometimes right, sometimes wrong ChatGPT implementations are everywhere. I pointed a major Web site operator at the You.com writing function. Those folks were excited and sent me a machine generated story, saying, “With a little editing, this is really good.” Was it good? What do I know about electric Corvettes? But these people had a Eureka moment. Look up Corvette on Google and what do you get? Ads. Use You.com and what do you get? Something that excited a big Web site owner.
The NYT included a quote from an expert, of course. Here’s a snippet I circled:
“This is a moment of significant vulnerability for Google,” said D. Sivakumar, a former Google research director who helped found a start-up called Tonita, which makes search technology for e-commerce companies. “ChatGPT has put a stake in the ground, saying, ‘Here’s what a compelling new search experience could look like.’” Mr. Sivakumar added that Google had overcome previous challenges and could deploy its arsenal of A.I. to stay competitive.
Notice the phrase “could deploy its arsenal of A.I. to stay competitive.” To me, this Xoogler is saying, “Oh, oh. Competition. But Google could take action.” Could is the guts of a PowerPoint presentation. Okay, but ChatGPT is doing. No could needed.
But the NYT article includes information that Google wants to be responsible. That’s a great idea. So was solving death or flying Loon balloons over Sri Lanka. The problem is that ChatGPT has caught the eye of Google’s arch enemy Microsoft and a platoon of business types at Davos. ChatGPT caused the NYT to write an article about two rich innovators returning like Disney executives to help out a sickly looking Mickey Mouse.
I am looking forward to watching Googzilla break free of the slime mold around its paws and respond. Time to abandon the Foosball and go, go, go.
Stephen E Arnold, January 20, 2023
The LaundroGraph: Bad Actors Be On Your Toes
January 20, 2023
Now here is a valuable use of machine learning technology. India’s DailyHunt reveals, “This Deep Learning Technology Is a Money-Launderer’s Worst Nightmare.” The software, designed to help disrupt criminal money laundering operations, is the product of financial data-science firm Feedzai of Portugal. We learn:
“The Feedzai team developed LaundroGraph, a self-supervised model that might reduce the time-consuming process of assessing vast volumes of financial interactions for suspicious transactions or monetary exchanges, in a paper presented at the 3rd ACM International Conference on AI in Finance. Their approach is based on a graph neural network, which is an artificial neural network or ANN built to process vast volumes of data in the form of a graph.”
The AML (anti-money laundering) software simplifies the job of human analysts, who otherwise must manually peruse entire transaction histories in search of unusual activity. The article quotes researcher Mario Cardoso:
“Cardoso explained, ‘LaundroGraph generates dense, context-aware representations of behavior that are decoupled from any specific labels.’ ‘It accomplishes this by utilizing both structural and features information from a graph via a link prediction task between customers and transactions. We define our graph as a customer-transaction bipartite graph generated from raw financial movement data.’ Feedzai researchers put their algorithm through a series of tests to see how well it predicted suspicious transfers in a dataset of real-world transactions. They discovered that it had much greater predictive power than other baseline measures developed to aid anti-money laundering operations. ‘Because it does not require labels, LaundroGraph is appropriate for a wide range of real-world financial applications that might benefit from graph-structured data,’ Cardoso explained.”
For those who are unfamiliar but curious (like me), navigate to this explanation of bipartite graphs. The future applications Cardoso envisions include detecting other financial crimes like fraud. Since the researchers intend to continue developing their tools, financial crimes may soon become much trickier to pull off.
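The customer-transaction bipartite graph Cardoso describes can be sketched in a few lines. This is a hypothetical illustration, not Feedzai’s code: the function names are ours, and the toy common-neighbor score merely stands in for LaundroGraph’s actual graph neural network link predictor.

```python
from collections import defaultdict

def build_bipartite_graph(movements):
    """Build the bipartite graph from raw financial movement data.

    movements: iterable of (customer_id, transaction_id, amount) tuples.
    Customers sit on one side of the graph, transactions on the other.
    """
    customer_edges = defaultdict(set)     # customer -> its transactions
    transaction_edges = defaultdict(set)  # transaction -> its customers
    for customer, transaction, _amount in movements:
        customer_edges[customer].add(transaction)
        transaction_edges[transaction].add(customer)
    return customer_edges, transaction_edges

def link_score(customer_edges, transaction_edges, customer, transaction):
    """Toy link-prediction score between a customer and a transaction.

    Fraction of the transaction's other customers who share at least one
    transaction with this customer. A real system would score links with
    a trained graph neural network instead of this heuristic.
    """
    others = transaction_edges.get(transaction, set()) - {customer}
    if not others:
        return 0.0
    shared = sum(
        1 for c in others
        if customer_edges[c] & customer_edges[customer]
    )
    return shared / len(others)
```

A low score for an observed customer-transaction link is the kind of “surprising edge” an analyst might flag for review.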
Cynthia Murrell, January 20, 2023
Is SkyNet a Reality or a Plot Device?
January 20, 2023
We humans must resist the temptation to outsource our reasoning to an AI, no matter how trustworthy it sounds. This is because, as iai News points out, “All-Knowing Machines Are a Fantasy.” Society is now in danger of confusing fiction with reality, a mistake that could have serious consequences. Professors Emily M. Bender and Chirag Shah observe:
“Decades of science fiction have taught us that a key feature of a high-tech future is computer systems that give us instant access to seemingly limitless collections of knowledge through an interface that takes the form of a friendly (or sometimes sinisterly detached) voice. The early promise of the World Wide Web was that it might be the start of that collection of knowledge. With Meta’s Galactica, OpenAI’s ChatGPT and earlier this year LaMDA from Google, it seems like the friendly language interface is just around the corner, too. However, we must not mistake a convenient plot device—a means to ensure that characters always have the information the writer needs them to have—for a roadmap to how technology could and should be created in the real world. In fact, large language models like Galactica, ChatGPT and LaMDA are not fit for purpose as information access systems, in two fundamental and independent ways.”
The first problem is that language models do what they are built to do very well: they produce text that sounds human-generated. Authoritative, even. Listeners unconsciously ascribe human thought processes to the results. In truth, algorithms lack understanding, intent, and accountability, making them inherently unreliable as unvetted sources of information.
Next is the nature of information itself. It is impossible for an AI to tap into a comprehensive database of knowledge because such a thing does not exist and probably never will. The Web, with its contradictions, incomplete information, and downright falsehoods, certainly does not qualify. Though for some queries a quick, straightforward answer is appropriate (how many tablespoons in a cup?) most are not so simple. One must compare answers and evaluate provenance. In fact, the authors note, the very process of considering sources helps us refine our needs and context as well as assess the data itself. We miss out on all that when, in search of a quick answer, we accept the first response from any search system. That temptation is hard enough to resist with a good old-fashioned Google search. The human-like interaction with chatbots just makes it more seductive. The article notes:
“Over both evolutionary time and every individual’s lived experience, natural language to-and-fro has always been with fellow human beings. As we encounter synthetic language output, it is very difficult not to extend trust in the same way as we would with a human. We argue that systems need to be very carefully designed so as not to abuse this trust.”
That is a good point, though AI developers may not be eager to oblige. It remains up to us humans to resist temptation and take the time to think for ourselves.
Cynthia Murrell, January 20, 2023
Eczema? No, Terminator Skin
January 20, 2023
Once again, yesterday’s science fiction is today’s science fact. ScienceDaily reports, “Soft Robot Detects Damage, Heals Itself.” Led by Rob Shepherd, associate professor of mechanical and aerospace engineering, Cornell University’s Organic Robotics Lab has developed stretchable fiber-optic sensors. These sensors could be incorporated in soft robots, wearable tech, and other components. We learn:
“For self-healing to work, Shepard says the key first step is that the robot must be able to identify that there is, in fact, something that needs to be fixed. To do this, researchers have pioneered a technique using fiber-optic sensors coupled with LED lights capable of detecting minute changes on the surface of the robot. These sensors are combined with a polyurethane urea elastomer that incorporates hydrogen bonds, for rapid healing, and disulfide exchanges, for strength. The resulting SHeaLDS — self-healing light guides for dynamic sensing — provides a damage-resistant soft robot that can self-heal from cuts at room temperature without any external intervention. To demonstrate the technology, the researchers installed the SHeaLDS in a soft robot resembling a four-legged starfish and equipped it with feedback control. Researchers then punctured one of its legs six times, after which the robot was then able to detect the damage and self-heal each cut in about a minute. The robot could also autonomously adapt its gait based on the damage it sensed.”
Some of us must remind ourselves these robots cannot experience pain when we read such brutal-sounding descriptions. As if to make that even more difficult, we learn this material is similar to human flesh: it can easily heal from cuts but has more trouble repairing burn or acid damage. The write-up describes the researchers’ next steps:
“Shepherd plans to integrate SHeaLDS with machine learning algorithms capable of recognizing tactile events to eventually create ‘a very enduring robot that has a self-healing skin but uses the same skin to feel its environment to be able to do more tasks.'”
Yep, sci-fi made manifest. Stay tuned.
Cynthia Murrell, January 20, 2023
Google PR: An Explainer about Smart Software
January 19, 2023
One of Google’s big wizards packs a brain with the impact of a Mark 7 16-inch/50-caliber naval gun. Boom. Boom. Boom.
Google does “novel” cats. What does Chess.com’s Mittens have to say about these felines? Perhaps, Mittens makes humans move. Google makes “novel” cats sort of move. © Google, 2023.
Jeff Dean has trained his intellectual weapons on a certain viral star in the smart software universe. “Google Research, 2022 & Beyond: Language, Vision and Generative Models.” The main point of the essay / blog post / PR salvo is that Google has made transformational advances. Great things are coming from the Google.
The explanation of the hows of the great things consumes about 7,000 words. For Google, that’s the equivalent of a digital War and Peace with a preface written by Henry James.
Here’s a passage which I circled in three different Googley colors:
We are working towards being able to create a single model that can understand many different modalities fluidly — understanding what each modality represents in context — and then actually generate different modes in that context. We’re excited by progress towards this goal! For example, we introduced a unified language model that can perform vision, language, question answering and object detection tasks in over 100 languages with state-of-the-art results across various benchmarks. In future applications, people can engage more senses to get computers to do what they want — e.g., “Describe this image in Swahili.” We’ve shown that on-device multi-modal models can make interacting with Google Assistant more natural. And we’ve demonstrated models that can, in various combinations, generate images, video, and audio controlled by natural language, images, and audio. More exciting things to come in this space!
Notice the phrase “progress towards this goal.” Notice the example “Describe this image in Swahili.” Notice the exclamation mark. Google is excited.
The write up includes Google’s jargonized charts and graphs; for example, “Preferred Metric Delta” and “SuperGLUE Score.” There is a graphic explaining multi-axis attention mechanism. And more.
Enough “catty” meta-commentary.
Here are several observations:
- Artificial intelligence is a fruit basket of methods, math, and malarkey. The fact that Google wants to pursue AI responsibly sounds good. What’s “responsible” mean? What’s artificial intelligence? These are difficult questions, and ones that are not addressed in the quasi-academic blog essay. Google has to sell advertising to keep the lights on and the plumbing in tip top shape… mostly. Seven thousand words is public relations, content marketing, and a response to the wild and crazy hyperbole about OpenAI changing the world. Okay, maybe after the lawyers, the regulators, the content copyright holders have figured out what is going on inside the allegedly open black boxes.
- If the reports from Davos are semi-accurate, Microsoft’s tie up with OpenAI and the idea of putting ChatGPT in Word makes me wonder if Microsoft Bob and Microsoft Clippy will return, allegedly smarter than before. Microsoft is riding a marketing wave and hoping to make money.
- Google is burdened with the albatross of Dr. Timnit Gebru and others who were transformed into former Googlers. What about Dr. Gebru’s legitimate concerns about baked-in bias? When one sucks in content, the system does not know whether content objects are more or less “better,” “right,” or distorted by a spidering timeout caused by latency. The fact remains that Google terminated people who attempted to point out some foundational flaws in what the Google was doing.
Net net: The write up does not talk about Forward Forward methods. The write up does not talk about the likelihood that regulators in the European Union will be interested in what and how Google moves forward. Google is in the regulatory spotlight. Will those regulators believe that Google can change its spots like the “novel” cats in the illustration? ChatGPT is something to get venture funders, entrepreneurs, and Davos executives to think positive thoughts. That does not mean the system will deliver. What about Mr. Brin’s self driving car prediction or the clever idea of solving death? Google may have to emulate in part Tesla, a company which allegedly faked the hands-off, full self-driving demo of its smart software. Seven thousand words means one thing to me:
‘The Google doth protest too much, methinks.’ Hamlet, Act 3, Scene 2. (I think Shakespeare put Google in a foul paper and some busybody inserted the name Gertrude.)
Boom, boom, boom.
Stephen E Arnold, January 19, 2023
Discord Resources
January 19, 2023
Amid Facebook’s waning prestige, TikTok’s connections to a surveillance-loving regime, and whatever the heck is happening at Musk’s Twitter, one might be in the market for an alternative social media platform. LifeHacker suggests an option gamers have been using for years in, “How to Find Discord Servers You’ll Actually Like.” Writer Khamosh Pathak recommends checking with friends, some of whom might already be on Discord. One can also check communities or pages found on other platforms, especially Reddit. Many of them have their own Discord servers. Message them if they don’t display a public link, Pathak advises. Or simply check Discord directories. We learn:
“You can try Discord’s own discovery tool. Click the Compass icon at the bottom of the sidebar. Their Featured collection isn’t that great, but the Search tool is. Search for something that you’re interested in, or something you want to explore. Searching for ‘mechanical keyboards’ brings up 79 different servers, for example. If you want to discover something entirely different, you can use a third-party Discord server directory like Disboard, which does a great job at categorizing and tagging communities. This will help you discover up-and-coming communities in different sections like gaming, music, and more. And, of course, there’s the search function that will help you narrow down to servers with specific interests, like woodworking or ceramics. For fans of gaming and anime, Discord.Me is an even better option. While they do have a varied collection of servers, their focus is really on gaming and anime (something that will become apparent after spending more than five seconds on the page). You can read the detailed descriptions if you want, or you can click the Join Now button to directly open the community in the Discord app.”
Discord offers the familiar ability to engage in text- and meme-based conversations, but one might also enjoy talking to other humans in real time over voice channels. Check it out for a different social media experience.
Cynthia Murrell, January 19, 2023
College Student Builds App To Detect AI Written Essays: Will It Work? Sure
January 19, 2023
Artists are worried that AI algorithms will steal their jobs, but now writers are in the same boat because the same thing is happening to them! AI is now competent enough to write coherent text. Algorithms can now write simple conversations, short movie scripts, flash fiction, and even assist in the writing process. Students are also excited about the prospect of AI writing algorithms, because it means they can finally outsource their homework to computers. Or they could have done that until someone was clever enough to design an AI that detected AI-generated essays. Business Insider reports on how a college student is now the bane of the global student body: “A Princeton Student Built An App Which Can Detect If ChatGPT Wrote An Essay To Combat AI-Based Plagiarism.”
Princeton computer science major Edward Tian spent his winter holiday designing an algorithm to detect if an essay was written by the new AI writer ChatGPT. Dubbed GPTZero, Tian’s AI can correctly identify what is written by a human and what is not. GPTZero works by rating how perplexing, complex, and random the text is. GPTZero proved to be very popular, and it crashed soon after its release. The app is now in a beta phase that people can sign up for, or they can use it on Tian’s Streamlit page.
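The perplexity signal GPTZero reportedly relies on can be illustrated with a toy model. This sketch is ours, not Tian’s code: a real detector scores text with a large language model, while this example uses an add-one-smoothed unigram model so the idea fits in a few self-contained lines.

```python
import math
from collections import Counter

def unigram_perplexity(text, corpus):
    """Perplexity of `text` under a unigram model fit on `corpus`.

    Uses add-one smoothing so unseen words get nonzero probability.
    Low perplexity means the text is highly predictable under the
    model, the signature detectors associate with machine generation.
    """
    corpus_words = corpus.lower().split()
    counts = Counter(corpus_words)
    total = len(corpus_words)
    vocab = len(counts) + 1  # +1 slot for unseen words
    words = text.lower().split()
    log_prob = 0.0
    for w in words:
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(words), 1))
```

Text built from the model’s common words scores a lower (more “machine-like”) perplexity than text full of words the model has never seen; GPTZero reportedly pairs a signal like this with a burstiness measure of sentence-to-sentence variation.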
Tian’s desire to prevent AI plagiarism motivated him to design GPTZero:
“Tian, a former data journalist with the BBC, said that he was motivated to build GPTZero after seeing increased instances of AI plagiarism. ‘Are high school teachers going to want students using ChatGPT to write their history essays? Likely not,’ he tweeted.”
AI writing algorithms are still in their infancy, like art generation AI. Writers should not fear job replacement yet. Artistic AI puts the arts in the same position painting was in with photography, radio with television, and libraries with the Internet. Artistic AI will change the mediums, but portions of them will persevere and others will change. AI should be used as a tool to improve the process.
Students would never find and use a work-around.
Whitney Grace, January 19, 2023
MBAs Dig Up an Old Chestnut to Explain Tech Thinking
January 19, 2023
Elon Musk is not afraid to share, or rather tweet, about his buyout and subsequent takeover of Twitter. He has detailed how he cleared the Twitter swamp of “woke employees” and the accompanying “woke mind virus.” Musk’s actions have been described as a prime example of poor leadership skills and lauded as a return to proper business. Musk and other rich business people see the current times as a war, but why? Vox’s article, “The 80-Year-Old Book That Explains Tech’s New Right-Wing Tilt,” turns to writer Antonio García Martínez:
“…who’s very plugged into the world of right-leaning Silicon Valley founders. García Martínez describes a project that looks something like reverse class warfare: the revenge of the capitalist class against uppity woke managers at their companies. ‘What Elon is doing is a revolt by entrepreneurial capital against the professional-managerial class regime that otherwise everywhere dominates (including and especially large tech companies),’ García Martínez writes. On the face of it, this seems absurd: Why would billionaires who own entire companies need to “revolt” against anything, let alone their own employees?”
García Martínez says the answer is in James Burnham’s 1941 book: The Managerial Revolution: What Is Happening In The World. Burnham wrote that the world was in late-stage capitalism, so the capitalist bigwigs would soon lose their power to the “managerial class.” These are people who direct industry and complex state operations. Burnham predicted that Nazi Germany and Soviet Russia would inevitably be the winners. He was wrong.
Burnham might have been right about the unaccountable managerial class, though: experts in economics, finance, and politics declare his book the best description of the present. Burnham said the managerial revolution would work as follows:
“The managerial class’s growing strength stems from two elements of the modern economy: its technical complexity and its scope. Because the tasks needed to manage the construction of something like an automobile require very specific technical knowledge, the capitalist class — the factory’s owners, in this example — can’t do everything on their own. And because these tasks need to be done at scale given the sheer size of a car company’s consumer base, its owners need to employ others to manage the people doing the technical work.
As a result, the capitalists have unintentionally made themselves irrelevant: It is the managers who control the means of production. While managers may in theory still be employed by the capitalist class, and thus subject to their orders, this is an unsustainable state of affairs: Eventually, the people who actually control the means of production will seize power from those who have it in name only.
How would this happen? Mainly, through nationalization of major industry.”
Burnham believed it was best if the government managed the economy, as in the USSR and Nazi Germany. The authoritarian governments killed that idea, but Franklin Roosevelt laid the groundwork for an administrative state with the New Deal.
The article explains that the current woke cancel culture war is viewed as a continuation of the New Deal. Managers have more important roles than the CEOs who control the money, so the CEOs are trying to maintain their relevancy and power. It could also be viewed as a societal shift toward a different work style and ethic, with the old guard refusing to lay down their weapons.
Does Burnham’s book describe Musk’s hostile and/or needed Twitter takeover? Yes and no. It depends on the perspective. It does make one wonder if big tech management is taking its cue from Thomas Hobbes’ 1651 Leviathan.
Whitney Grace, January 19, 2023