Google and AMP: Good Enough

July 10, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

With the rise of mobile devices in the early 2010s, the Internet was slammed with slow-loading Web sites. In 2015, Google told publishers it had a solution dubbed “Accelerated Mobile Pages” (AMP). Everyone bought into AMP, but it soon proved to be more like a “speed trap,” says The Verge.

AMP worked well at first, but it made advertising tools that were not Google’s difficult to use. Google’s plan to make the Internet great again backfired. Seventeen state attorneys general filed a lawsuit against Google in 2020 with AMP as a key topic. The lawsuit alleges Google purposefully designed AMP to prevent publishers from using alternative ad tools. The US Justice Department filed its own antitrust lawsuit in January 2023, claiming Google is attempting to control more of the Internet.


A creature named Googzilla chats with a well-known publisher about a business relationship. Googzilla is definitely impressed with the publisher’s assertion that quality news can generate traffic and revenue without a certain Web search company’s help. Does the publisher trust Googzilla? Sure, the publisher says, “We just have lunch and chat. No problem.” 

Google promised that AMP would drive more traffic to publishers’ Web sites and it would fix the loading speed lag. Google was the only big tech company that offered a viable solution to the growing demand mobile devices created, so everyone was forced to adopt AMP. Google did not care as long as it was the only player in the game:

“As long as anyone played the game, everybody had to. ‘Google’s strategy is always to create prisoner’s dilemmas that it controls — to create a system such that if only one person defects, then they win,’ a former media executive says. As long as anyone was willing to use AMP and get into that carousel, everyone else had to do the same or risk being left out.”

Google promised AMP would be open source but Google flip-flopped on that decision whenever it suited the company. Non-Google developers “fixed” AMP by working through its locked down structure so it could support other tools. Because of their efforts AMP got better and is now a decent tool. Google, however, trundles along. Perhaps Google is just misunderstood.

Whitney Grace, July 10, 2023

Amazon: Machine-Generated Content Adds to Overhead Costs

July 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Amazon Has a Big Problem As AI-Generated Books Flood Kindle Unlimited” makes it clear that Amazon is going to have to re-think how it runs its self-publishing operation and figure out how to deal with machine-generated books from “respected” publishers.

The author of the article is expressing concern about ChatGPT-type outputs being assembled into electronic books. That concern is focused on Amazon and its ageing, arthritic Kindle eBook business. With text-to-voice tools, I suppose one should think about Audible audiobooks spit out by software too. The culprit, however, may be Amazon itself. Paying a person to read a book for seven hours, not screw up, and keep the sound acceptable when the reader has a stuffed nose can be pricey.


A senior Amazon executive thinks to herself, “How can I fix this fake content stuff? I should really update my LinkedIn profile too.” Will the lucky executive charged with fixing the problem identified in the article be allowed to eliminate revenue? Yep, get going on the LinkedIn profile first. Tackle the fake stuff later.

The write up points out:

the mass uploading of AI-generated books could be used to facilitate click-farming, where ‘bots’ click through a book automatically, generating royalties from Amazon Kindle Unlimited, which pays authors by the amount of pages that are read in an eBook.
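
To make the economics concrete, here is a minimal sketch of the incentive the quoted passage describes. The per-page rate below is an assumption for illustration only; Kindle Unlimited’s actual payout varies monthly and is drawn from a shared fund.

    # Toy model of page-read royalties; the rate is hypothetical.
    ASSUMED_RATE_PER_PAGE = 0.004  # USD per page read, an assumption

    def royalty(pages_read: int, rate: float = ASSUMED_RATE_PER_PAGE) -> float:
        """Royalty earned when a 'reader' (human or bot) pages through a book."""
        return pages_read * rate

    # 1,000 machine-generated books, each "read" 200 pages by bots
    print(f"${royalty(1_000 * 200):,.2f}")  # $800.00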

And what’s Amazon doing about this quasi-fake content? The article reports:

It [Amazon] didn’t explicitly state that it was making an effort specifically to address the apparent spam-like persistent uploading of nonsensical and incoherent AI-generated books.

Then, the article raises the issues of “quality” and “authenticity.” I am not sure what these two glory words mean. My impression is that a machine-generated book is not as good as one crafted by a subject matter expert or a motivated human author. If I am right, the editors at TechRadar are apparently oblivious to the idea of using XML-structured content and a MarkLogic-type tool to slice-and-dice content. The components are then assembled into a reference book. I want to point out that this method has been in use by professional publishers for a number of years. Because I signed a confidentiality agreement, I am not able to identify this outfit.

But I still recall the buzz of excitement that rippled through one officer meeting at this outfit when those listening to a presentation realized [a] humanoids could be terminated and a reduced staff could produce more books and [b] the guts of the technology was a database, a technology mostly understood by those with a few technical conferences under their belt. Yippy! No one had to learn anything. Just calculate the financial benefit of dumping humans and figure out how to expense the contractors who could format content from a hovel in a Myanmar-type low-cost location. At night, the executives dreamed about their bonuses for hitting financial targets and how to start RIF’ing editorial staff, subject matter experts, and assorted specialists who doodled with front matter, footnotes, and fonts.
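
For readers unfamiliar with the slice-and-dice approach, here is a minimal sketch of the idea, using Python’s standard-library ElementTree as a stand-in for a MarkLogic-type tool. The corpus, tags, and topic attribute are hypothetical; the point is only that components stored as structured XML can be reassembled into a “new” reference book with no author in sight.

    import xml.etree.ElementTree as ET

    # Hypothetical XML corpus; a real publisher would hold thousands of
    # components in a MarkLogic-type repository.
    SOURCE = """
    <corpus>
      <entry topic="chemistry"><title>Benzene</title><body>...</body></entry>
      <entry topic="physics"><title>Entropy</title><body>...</body></entry>
      <entry topic="chemistry"><title>Ethanol</title><body>...</body></entry>
    </corpus>
    """

    def assemble_reference(xml_text: str, topic: str) -> ET.Element:
        """Slice out entries matching a topic; assemble them into a 'book'."""
        corpus = ET.fromstring(xml_text)
        book = ET.Element("book", {"subject": topic})
        for entry in corpus.findall(f".//entry[@topic='{topic}']"):
            book.append(entry)  # dice: reuse the component verbatim
        return book

    book = assemble_reference(SOURCE, "chemistry")
    print(ET.tostring(book, encoding="unicode"))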

Net net: There is no fix. The write up illustrates a lack of understanding about how large sections of the information industry use technology and about the established procedures for seizing a cost-saving opportunity. “Quality” means whatever produces more revenue; “authenticity” is a marketing job. Amazon has a content problem and has to gear up its tools and business procedures to cope with machine-generated content, whether in product reviews or eBooks.

Stephen E Arnold, July 7, 2023

Pricing Smart Software: Buy Now Because Prices Are Going Up in 18 Hours, 46 Minutes, and Nine Seconds, Eight Seconds, Seven…

July 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I ignore most of the apps, cloud, and hybrid products and services infused with artificial intelligence. As one wit observed, AI means artificial ignorance. What I find interesting are the pricing models used by some of the firms. I want to direct your attention to Sheeter.ai. The service lets one say in natural language something like “Calculate the median of A:Z rows.” The system then spits out the Excel formula, which can be pasted into a cell. The Sheeter.ai formula works in Google Sheets too because Google wants to watch Microsoft Excel shrivel and die a painful death. The benefits of the approach are similar to services which convert natural language statements into well-formed SQL code (in theory). Will the dynamic duo of Google and Microsoft implement a similar feature in their spreadsheets? Of course, but Sheeter.ai is betting its approach is better.
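
To show the input/output shape of such a service, here is a toy, pattern-matching stand-in. This is not Sheeter.ai’s implementation (the real product presumably uses a language model); the phrase-to-formula rules below are assumptions for illustration.

    import re

    # Hypothetical phrase-to-formula rules; a real service would use an LLM.
    PATTERNS = [
        (r"median of (\S+)",  lambda m: f"=MEDIAN({m.group(1)})"),
        (r"average of (\S+)", lambda m: f"=AVERAGE({m.group(1)})"),
        (r"sum of (\S+)",     lambda m: f"=SUM({m.group(1)})"),
    ]

    def to_formula(request: str) -> str:
        """Translate a plain-English request into an Excel/Sheets formula."""
        for pattern, build in PATTERNS:
            match = re.search(pattern, request, re.IGNORECASE)
            if match:
                return build(match)
        raise ValueError(f"No rule matches: {request!r}")

    print(to_formula("Calculate the median of A:Z rows"))  # =MEDIAN(A:Z)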

The innovation for which Sheeter.ai deserves a pat on the back is its approach to pricing. The screenshot below makes clear that the price one sees on the screen at a particular point in time is going to go up. A countdown timer helps boost user anxiety about price.


I was disappointed when the graphics did not include a variant of James Bond (the super spy) chained to an explosive device. Bond, James Bond, was using his brain to deactivate the timer. Obviously he was successful because there has been a half century of Bond, James Bond, films. He survives every time.
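
Mechanically, the dark pattern is trivial to build, which may be why it spreads. Here is a minimal sketch of deadline pricing; the price tiers and the deadline are hypothetical, not Sheeter.ai’s actual numbers.

    from datetime import datetime, timedelta

    # Hypothetical tiers; the countdown exists to manufacture urgency.
    DEADLINE = datetime.now() + timedelta(hours=18, minutes=46, seconds=9)
    EARLY_PRICE, LATE_PRICE = 29.00, 49.00  # USD, assumed

    def price_now(now: datetime) -> float:
        """Low price before the deadline, high price after."""
        return EARLY_PRICE if now < DEADLINE else LATE_PRICE

    left = DEADLINE - datetime.now()
    print(f"Buy now for ${price_now(datetime.now()):.2f}; "
          f"price rises to ${LATE_PRICE:.2f} in {left.seconds // 3600}h")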

Will other AI-infused products and services implement anxiety patterns to induce people to provide their name, email, and credit card? It seems in line with the direction in which online and AI businesses are moving. Right, Mr. Bond. Nine, eight, seven….

Stephen E Arnold, July 7, 2023

Step 1: Test AI Writing Stuff. Step 2: Terminate Humanoids. Will Outrage Prevent the Inevitable?

July 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated by the information (allegedly actual factual) in “Gizmodo and Kotaku Staff Furious After Owner Announces Move to AI Content.” Part of my interest is the subtitle:

God, this is gonna be such a f***ing nightmare.

Ah, for whom, pray tell. Probably not for the owners, who may see a pot of gold at the end of the smart software rainbow; for example, Costs Minus Humans Minus Health Care Minus HR Minus Miscellaneous Humanoid costs like latte makers, office space, and salaries / bonuses. What do these produce? More money (value) for the lucky most senior managers and selected stakeholders. Humanoids lose; software wins.


A humanoid writer sits at a desk and wonders if the smart software will become a pet rock or a creature let loose to ruin her life by those who want a better payoff.

For the humanoids, it is hasta la vista. Assume the quality is worse? Then the analysis requires quantifying “worse.” Software will be cheaper over a long enough time interval; expensive humans lose. Quality is like love and ethics. Money matters; quality becomes good enough.

Will fury, outrage, or protests make a difference? Nope.

The write up points out:

“AI content will not replace my work — but it will devalue it, place undue burden on editors, destroy the credibility of my outlet, and further frustrate our audience,” Gizmodo journalist Lin Codega tweeted in response to the news. “AI in any form, only undermines our mission, demoralizes our reporters, and degrades our audience’s trust.” “Hey! This sucks!” tweeted Kotaku writer Zack Zwiezen. “Please retweet and yell at G/O Media about this! Thanks.”

Much to the delight of her significant others, the “f***ing nightmare” is from the creative, imaginative humanoid Ashley Feinberg.

An ideal candidate for early replacement by a software system and a list of stop words.

Stephen E Arnold, July 5, 2023

Academics and Ethics: We Can Make It Up, Right?

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Bogus academic studies were already a troubling issue. Now generative text and image algorithms are turbocharging the problem. Nature describes how in “AI Intensifies Fight Against ‘Paper Mills’ That Churn Out Fake Research.” Writer Layal Liverpool states:

“Generative AI tools, including chatbots such as ChatGPT and image-generating software, provide new ways of producing paper-mill content, which could prove particularly difficult to detect. These were among the challenges discussed by research-integrity experts at a summit on 24 May, which focused on the paper-mill problem. ‘The capacity of paper mills to generate increasingly plausible raw data is just going to be skyrocketing with AI,’ says Jennifer Byrne, a molecular biologist and publication-integrity researcher at New South Wales Health Pathology and the University of Sydney in Australia. ‘I have seen fake microscopy images that were just generated by AI,’ says Jana Christopher, an image-data-integrity analyst at the publisher FEBS Press in Heidelberg, Germany. But being able to prove beyond suspicion that images are AI-generated remains a challenge, she says. Language-generating AI tools such as ChatGPT pose a similar problem. ‘As soon as you have something that can show that something’s generated by ChatGPT, there’ll be some other tool to scramble that,’ says Christopher.”

Researchers and integrity analysts at the summit brainstormed ideas to combat the growing problem and plan to publish an action plan “soon.” On a related issue, attendees agreed AI can be a legitimate writing aid but weighed certain requirements, such as watermarking AI-generated text and providing access to raw data.


Post-docs and graduate students make up data. MidJourney captures the camaraderie of 21st-century whiz kids rather well. A shared experience is meaningful.

Naturally, such decrees would take time to implement. Meanwhile, readers of academic journals should up their levels of skepticism considerably.

But tenure and grant money are more important than — what’s that concept? — ethical behavior for some.

Cynthia Murrell, July 4, 2023

NSO Group Restructuring Keeps Pegasus Aloft

July 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The NSO Group has been under fire from critics for the continuing deployment of its infamous Pegasus spyware. The company, however, might more resemble a different mythological creature, the phoenix: since its creditors pulled their support, NSO appears to be rising from the ashes.


Pegasus continues to fly. Can it monitor some of the people who have mobile phones? Not in ancient Greece. Other places? I don’t know. MidJourney’s creative powers do not shed light on this question.

The Register reports, “Pegasus-Pusher NSO Gets New Owner Keen on the Commercial Spyware Biz.” Reporter Jessica Lyons Hardcastle writes:

“Spyware maker NSO Group has a new ringleader, as the notorious biz seeks to revamp its image amid new reports that the company’s Pegasus malware is targeting yet more human rights advocates and journalists. Once installed on a victim’s device, Pegasus can, among other things, secretly snoop on that person’s calls, messages, and other activities, and access their phone’s camera without permission. This has led to government sanctions against NSO and a massive lawsuit from Meta, which the Supreme Court allowed to proceed in January. The Israeli company’s creditors, Credit Suisse and Senate Investment Group, foreclosed on NSO earlier this year, according to the Wall Street Journal, which broke that story the other day. Essentially, we’re told, NSO’s lenders forced the biz into a restructure and change of ownership after it ran into various government ban lists and ensuing financial difficulties. The new owner is a Luxembourg-based holding firm called Dufresne Holdings controlled by NSO co-founder Omri Lavie, according to the newspaper report. Corporate filings now list Dufresne Holdings as the sole shareholder of NSO parent company NorthPole.”

President Biden’s executive order notwithstanding, Hardcastle notes governments’ responses to spyware have been tepid at best. For example, she tells us, the EU opened an inquiry after spyware was found on phones associated with politicians, government officials, and civil society groups. The result? The launch of an organization to study the issue. Ah, bureaucracy! Meanwhile, Pegasus continues to soar.

Cynthia Murrell, July 4, 2023

Databricks: Signal to MBAs and Data Wranglers That Is Tough to Ignore

June 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Do you remember the black and white pictures of the Pullman riots? No, okay. Steel worker strikes in Pittsburgh? No. Scuffling outside of Detroit auto plants? No. Those images may be helpful to get a sense of what newly disenfranchised MBAs and data wranglers will be doing in the weeks and months ahead.

“Databricks Revolutionizes Business Data Analysis with AI Assistant” explains that the Databricks smart software:

interprets the query, retrieves the relevant data, reads and analyzes it, and produces meaningful answers. This groundbreaking approach eliminates the need for specialized technical knowledge, democratizing data analysis and making it accessible to a wider range of users within an organization. One of the key advantages of Databricks’ AI assistant is its ability to be trained on a company’s own data. Unlike generic AI systems that rely on data from the internet, LakehouseIQ quickly adapts to the specific nuances of a company’s operations, such as fiscal year dates and industry-specific jargon. By training the AI on the customer’s specific data, Databricks ensures that the system truly understands the domain in which it operates.
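
The fiscal-year example in the quote is worth making concrete. Below is a minimal, hypothetical sketch of that kind of company-specific grounding: resolving an in-house convention (a fiscal year that starts in February) before running the query. This illustrates the concept only; it is not Databricks’ LakehouseIQ code.

    import sqlite3

    # Assumed in-house convention: fiscal year N runs Feb N through Jan N+1.
    def fiscal_year_range(year: int) -> tuple[str, str]:
        """Resolve company jargon: our fiscal year starts in February."""
        return (f"{year}-02-01", f"{year + 1}-01-31")

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (day TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)",
                    [("2023-03-15", 100.0), ("2024-03-15", 250.0)])

    # "What was revenue for FY2023?" resolves to a date-bounded SQL query.
    start, end = fiscal_year_range(2023)
    total, = con.execute(
        "SELECT SUM(amount) FROM sales WHERE day BETWEEN ? AND ?",
        (start, end)).fetchone()
    print(f"FY2023 revenue: {total}")  # 100.0; the 2024 sale falls in FY2024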


MidJourney has delivered an interesting image (completely original, of course) depicting angry MBAs and data wranglers massing in Midtown and preparing to storm one of the quasi monopolies which care about their users, employees, the environment, and bunny rabbits. Will these professionals react like those in other management-labor dust ups?

Databricks appears to be one of the outfits applying smart software to reduce or eliminate professional white collar work done by those who buy $7 lattes, wear designer T shirts, and don wonky sneakers for important professional meetings.


The CEO of Databricks (a data management and analytics firm) says:

By training their AI assistant on the customer’s specific data, Databricks ensures that it comprehends the jargon and intricacies of the customer’s industry, leading to more accurate and insightful analysis.

My interpretation of the article is simple: If the Databricks system works, the MBAs and data wranglers will be out of a job. Furthermore, my view is that if systems like Databricks’ work as advertised, the shift from expensive and unreliable humans will not be gradual. Think phase change. One moment you have a solid and then you have plasma. Hot plasma can vaporize organic compounds in some circumstances. Maybe MBAs and data wranglers are impervious? On the other hand, maybe not.

Stephen E Arnold, June 29, 2023

Google: Users and Its Ad Construction

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In the last 48 hours, I have heard or learned about some fresh opinions about Alphabet / Google / YouTube (hereinafter AGY). Google Glass III (don’t forget the commercial version, please) has been killed. Augmented reality? Not for the Google. Also, AGY continues to output promises about its next Bard. Is it really better than ChatGPT? And AGY is back in the games business. (Keep in mind that Google pitched Yahoo on a games deal in 2004, if I remember correctly, then flamed out with its underwhelming online game play a decade later, which was followed by the somewhat forgettable Stadia game service.) Finally, a person told me that Prabhakar Raghavan allegedly said, “We want our customers to be happy.” Inspirational indeed. I think I hit the highlights from the information I encountered since June 25, 2023.


The ever sensitive creator MidJourney provided this illustration of a structure with a questionable foundation. Could the construct lose a piece here and a piece there until it must be dismantled to save the snail darters living in the dormers? Are the residents aware of the issue?

The fountain of Googliness seems to be copious. I read “Google Ads Can Do More for Its Customers.” The main point of the article is that:

Google’s dominance in the search engine industry, particularly in search ads, is unparalleled, making it virtually the only viable option for advertisers seeking to target search traffic. It’s a conflict of interest, as Google’s profitability is closely tied to ad revenue. As Google doesn’t do enough to make Google Ads a more transparent platform and reduce the cost for its customers, advertisers face inflated costs and fierce competition, making it challenging for smaller businesses with limited budgets to compete effectively.

Gulp. If I understand this statement, Google is exploiting its customers. Remember: these are the entities providing the money to fund AGY’s numerous administrative costs. Those costs are going just one way: up and up. Imagine the data center, legal fines, and litigation costs. Big numbers before adding in salaries and bonuses.

Observations:

  1. Structural weakness can be ignored until the edifice just collapses.
  2. Unhappy customers might want to drop by for a conversation and the additional weight of these humanoids may cross a tipping point.
  3. US regulators may ignore AGY, but government officials in other countries may not.

Bud Light’s adventures with its customers provide a useful glimpse of what those who are unhappy can do and do quickly. The former Bud Light marketing whiz has a degree from Harvard. Perhaps this individual can tackle the AGY brand? Just a thought.

Stephen E Arnold, June 28, 2023

Harvard University: Ethics and Efficiency in Teaching

June 28, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

You are familiar with Harvard University, the school of broad endowments and a professor who allegedly made up data and criticized colleagues for taking similar liberties with the “truth.” For more color about this esteemed Harvard professional read “Harvard Behavioral Scientist Who Studies Dishonesty Is Accused of Fabricating Data.”

Now the academic home of William James and many notable experts in ethics, truth, reasoning, and fund raising has made an interesting decision. “Harvard’s New Computer Science Teacher Is a Chatbot.”


A terrified 17-year-old from an affluent family in Brookline asks, “Professor Robot, will my social acceptance score be reduced if I do not understand how to complete the programming assignment?” The inspirational image is an output from the copyright compliant and ever helpful MidJourney service.

The article published in the UK “real” newspaper The Independent reports:

Harvard University plans to use an AI chatbot similar to ChatGPT as an instructor on its flagship coding course.

The write up adds:

The AI teaching bot will offer feedback to students, helping to find bugs in their code or give feedback on their work…

Once installed and operating, the chatbot will be the equivalent of a human teaching students how to make computers do what the programmer wants? Hmmm.

Several questions:

  1. Will the Harvard chatbot, like a living, breathing Harvard ethics professor, make up answers?
  2. Will the Harvard chatbot be cheaper to operate than a super motivated, thrillingly capable adjunct professor, graduate student, or doddering lecturer close to retirement?
  3. Why does an institution like Harvard lack the infrastructure to teach humans with humans?
  4. Will the use of chatbot output code be considered original work?

But as one maverick professor keeps saying, “Just getting admitted to a prestigious university punches one’s employment ticket.”

That’s the spirit of modern education. As William James, a professor from a long and dusty era, said:

The world we see that seems so insane is the result of a belief system that is not working. To perceive the world differently, we must be willing to change our belief system, let the past slip away, expand our sense of now, and dissolve the fear in our minds.

Should students fear algorithms teaching them how to think?

Stephen E Arnold, June 28, 2023

Dust Up: Social Justice and STEM Publishing

June 28, 2023

Are you familiar with “social justice warriors”? These are people who take it upon themselves to police the world for their moral causes, usually from a self-righteous standpoint. Social justice warriors are also known by the acronym SJWs and can cross over into the infamous Karen zone. Unfortunately, Heterodox STEM reports SJWs have invaded the science community; Anna Krylov and Jay Tanzman discuss the issue in their paper “Critical Social Justice Subverts Scientific Publishing.”

SJWs advocate for the politicization of science, adding to scientific research an ideology known as critical social justice (CSJ). It upends the true purpose of science, which is to help and advance humanity. CSJ adds censorship, scholarship suppression, and social engineering to science.

Krylov and Tanzman’s paper was presented at the Perils for Science in Democracies and Authoritarian Countries event, and they argue CSJ harms scientific research more than it helps it. They compare CSJ to Orwell’s fictional Ministry of Love, although real-life examples such as Joseph Goebbels’s Nazi Ministry of Propaganda, the USSR’s Department for Agitation and Propaganda, and China’s authoritarian regime work better. CSJ is the opposite of the Enlightenment, which liberated human psyches from religious and royal dogmas. The Enlightenment engendered critical thinking, the scientific process, philosophy, and discovery. The world became more tolerant, wealthier, better educated, and healthier as a result.

CSJ creates censorship and paranoia akin to those of tyrannical regimes:

“According to CSJ ideologues, the very language we use to communicate our findings is a minefield of offenses. Professional societies, universities, and publishing houses have produced volumes dedicated to “inclusive” language that contain long lists of proscribed words that purportedly can cause offense and—according to the DEI bureaucracy that promulgates these initiatives—perpetuate inequality and exclusion of some groups, disadvantage women, and promote patriarchy, racism, sexism, ableism, and other isms. The lists of forbidden terms include “master database,” “older software,” “motherboard,” “dummy variable,” “black and white thinking,” “strawman,” “picnic,” and “long time no see” (Krylov 2021: 5371, Krylov et al. 2022: 32, McWhorter 2022, Paul 2023, Packer 2023, Anonymous 2022). The Google Inclusive Language Guide even proscribes the term “smart phones” (Krauss 2022). The Inclusivity Style  Guide of the American Chemical Society (2023)—a major chemistry publisher of more than 100 titles—advises against using such terms as “double blind studies,” “healthy weight,” “sanity check,” “black market,” “the New World,” and “dark times”…”

New meanings that cause offense are projected onto benign words and their use is taken out of context. At this rate, everything people say will be considered offensive, including the most uncontroversial topic: the weather.

Science must be free not only from CSJ ideologies but also from corporate ideologies that promote profit margins. Examples from American history include Big Tobacco, sugar manufacturers, and Big Pharma.

Whitney Grace, June 28, 2023
