Amazon: Great Products and Transparent Pricing Impress
October 24, 2023
This essay is the work of a dumb humanoid. No smart software required.
Free SaaS trials are supposed to demonstrate a product’s capabilities and benefits and convince the user to subscribe. Sometimes free trials require users to input their billing information. If users aren’t careful, they’re charged for the SaaS. Reddit user Mizcizi had a bad experience when he signed up for AWS; read his post, “1k Bill After 1 Month, For The Service I Didn’t Even Use.”
Mizcizi signed up for a free AWS trial to test its Web hosting. He tried AWS Amplify but didn’t like it. He still wanted to use AWS S3 for storage, and everything went well for a while; then the problems started. First, the data couldn’t be verified; next, the account was suspended. He ignored the issues because he had moved on to another storage service.
AWS then slapped him with a roughly $1,000 bill for RDS usage: 280.233 Hrs, 1,129.972 IOPS-Mo, and 150.663 GB-Mo. Here are more details:
“Now there are a few things wrong with this. At first, I don’t remember setting up any RDS service. I might have checked what it provides because I was also checking for a DB hosting at the time, so I’m not 100% about that. What I am 100% sure is that I never used RDS anywhere, so I don’t know where all their IOPS are coming from. One thing that also doesn’t make sense is the 280.233 Hrs resulting in 391.77 USD. In the free trial for RDS, it says that you get 750 free hours.”
Mizcizi is trying to work with AWS support. Because he’s a first-time user, they will probably wipe the bill. He could also dispute the charge through his credit card company. Other comments offered suggestions like setting up billing notifications, opening a support case, and explaining how the charges racked up.
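The billing-notification suggestion is worth acting on day one of any free trial. Below is a minimal sketch using boto3 to alarm on AWS’s EstimatedCharges metric; it assumes billing alerts are enabled on the account, that an SNS topic already exists (the ARN shown is a hypothetical placeholder), and that the alarm is created in us-east-1, the only region where billing metrics are published. The $10 threshold is an arbitrary illustration, not a recommendation.

```python
# Minimal sketch: alarm when the AWS estimated bill crosses a threshold.
# Assumes billing alerts are enabled and an SNS topic already exists.
import boto3

# Billing metrics are only published in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="free-trial-bill-watchdog",
    AlarmDescription="Warn me before a 'free' trial turns into a $1,000 invoice.",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,          # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=10.0,        # alert at $10 -- adjust to taste
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # hypothetical topic ARN
)
```

AWS Budgets offers a similar guardrail without code, but either way the point stands: the notification has to exist before the surprise invoice does.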
Many comments said that AWS allegedly overcharges some users and recommended friendlier services for novice developers. Then the invoice arrives. Yipes.
Whitney Grace, October 24, 2023
Publishers and Remora: Choose the Right Host and Stop Complaining, Please
October 20, 2023
This essay is the work of a dumb humanoid. No smart software involved.
Today, let’s reflect on the suckerfish, or remora. The fish attaches itself to a shark and feeds on scraps of the host’s meals or nibbles on the other parasites living on its food truck. Why think about a fish with a sucking disk on its head?
Navigate to “Silicon Valley Ditches News, Shaking an Unstable Industry.” The article reports as “real” news:
Many news companies have struggled to survive after the tech companies threw the industry’s business model into upheaval more than a decade ago. One lifeline was the traffic — and, by extension, advertising — that came from sites like Facebook and Twitter. Now that traffic is disappearing.
Translation: No traffic, no clicks. No clicks and no traffic mean reduced revenue. Why? The days of printed newspapers and magazines are over. Forget the costs of printing and distributing. Think about people visiting a Web site. No traffic means that advertisers will go where the readers are. Want news? Fire up a mobile phone and graze on the information available. Sure, some sites want money, but most people find free services. I like France24.com, but there are options galore.
Wikipedia provides a snap of a remora attached to a scuba diver. Smart remoras hook onto a fish with presence.
The shift in content behavior has left traditional publishing companies with a challenge: generating revenue. Newspapers and specialized news services have tried a number of tactics over the years. The problem is that the number of people who will pay for content is large, but finding those people and getting them to spit out a credit card is expensive. At the same time, the cost of generating “real” news is high as well.
In 1992, James B. Twitchell published Carnival Culture: The Trashing of Taste in America. The book offered insight into how America has embraced showmanship in its information. Dr. Twitchell’s book appeared about 30 years ago. Today Google, Meta, and TikTok (among other digital-first outfits) amplify the lowest common denominator of information. “Real” publishing aimed higher.
The reluctant adjustment by “white shoe” publishing outfits was to accept traffic and advertising revenue from users who relied on portable surveillance devices. Now the traffic generators have realized that “attention magnet” information is where the action is. Plus smart software operated by do-it-yourself experts provides a flow of information which the digital services can monetize. A digital “mom” will block the most egregious outputs. The goal is good enough.
The optimization of content shaping now emerging from high-technology giants is further marginalizing the “real” publishers.
Almost 45 years ago, the president of a company with a high revenue online business database asked me, “Do you think we could pull our service off the timesharing vendors and survive?” The idea was that a product popular on an intermediary service could be equally popular as a standalone commercial digital product.
I said, “No way.”
The reasons were obvious to me because my team had analyzed this question over the hill and around the barn several times. The intermediary aggregated information. Aggregated information acts like a magnet. A single online information resource does not have the same magnetic pull. Therefore, the cost to build traffic would exceed the financial capabilities of the standalone product. That’s why commercial database products were rolled up by large outfits like Reed Elsevier and a handful of other companies.
Maybe the fix for the plight of the New York Times and other “real” publishers anchored in print is to merge, and fast. However, further consolidation of newspapers and book publishers takes time. As the New York Times’ “our hair is on fire” article points out:
Privately, a number of publishers have discussed what a post-Google traffic future may look like, and how to better prepare if Google’s A.I. products become more popular and further bury links to news publications… “Direct connections to your readership are obviously important,” Ms. LaFrance [Adrienne LaFrance, the executive editor of The Atlantic] said. “We as humans and readers should not be going only to three all-powerful, attention-consuming mega platforms to make us curious and informed.” She added: “In a way, this decline of the social web — it’s extraordinarily liberating.”
Yep, liberating. “Real” journalists can do TikToks and YouTube videos. A tiny percentage will become big stars and make big money until they don’t. The senior managers of “shaky” “real” publishing companies will innovate. Unfortunately, start-ups spawned by “real” publishing companies face the same daunting odds as any start-up: a brutal attrition rate.
Net net: What will take the place of the old-school approach to newspapers, magazines, and books? My suggestion is to examine smart software and the popular content on YouTube. One example is the MeidasTouch “network” on YouTube. Professional publishers, take note. Newspaper and magazine publishers may also want to look at what Ben Meiselas and his cohorts have achieved. Want a less intellectual approach to information dominance? Ask a teenager about TikTok.
Yep, liberating. Some of those in publishing will have to adapt: when X.com or another allegedly monopolistic high-technology outfit changes direction, the suckerfish has to go along for the ride or face an inhospitable environment, hunger, and probably a hungry predator from a bottom-feeding investment group.
Stephen E Arnold, October 20, 2023
Innovation: Perhaps Keep an Eye Open for Non US Players?
October 20, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“40 Companies That Are Beating the West” contains thumbnail descriptions of firms RestOfWorld.org believes are winning the hearts and minds of users. The losers, according to the write up, are in Silicon Valley and Western Europe. I am not convinced that the companies profiled are winners, but some are. Of interest to me and my research team are the comments about a handful of companies; namely:
- Binance, a crypto outfit, which to me suggests a service designed to appeal to a certain slice of humanity
- ByteDance, a China fave, a super conduit for shaped messages, and a vacuum pump for obtaining useful data
- Telegram Messenger, a super app for interesting applications
- Tencent, a China fave.
In my lectures to a law enforcement group last week, I mentioned several non-US outfits in the policeware and intelware sector. RestOfWorld.org did not include those in its round up.
The snapshots are interesting, but the ones I listed above are definite companies to monitor.
Stephen E Arnold, October 20, 2023
OpenAI Dips Its Toe in Dark Waters
October 20, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Facebook, TikTok, YouTube, Instagram, and other social media platforms have exacerbated woke and PC culture. It’s gotten to the point where everyone and everything are viewed as offensive. Even AI assistants, aka chatbots, are being programmed with censorship. OpenAI designed the ChatGPT assistant, and the organization is constantly upgrading the generative text algorithm. OpenAI released a white paper about adding vision capabilities to GPT-4: “GPT-4V(ision) System Card.”
GPT-4V relies on large language models (LLMs) to expand its knowledge base and tackle new problems and prompts. OpenAI used publicly available data and licensed sources to train GPT-4V, then refined it with human feedback. The paper explains that while GPT-4V was proficient in many areas, it was severely lacking when it came to presenting factual information.
OpenAI tested GPT-4V’s ability to replicate scientific and medical information. Unfortunately, GPT-4V continued to stereotype and offer ungrounded inferences from text and images, as AI algorithms have proven prone to do. The biggest concern is that ChatGPT’s latest upgrade will be used to spread disinformation:
“As noted in the GPT-4 system card, the model can be used to generate plausible realistic and targeted text content. When paired with vision capabilities, image and text content can pose increased risks with disinformation since the model can create text content tailored to an image input. Previous work has shown that people are more likely to believe true and false statements when they’re presented alongside an image, and have false recall of made up headlines when they are accompanied with a photo. It is also known that engagement with content increases when it is associated with an image.”
After GPT-4V was tested on multiple tasks, it failed to accurately convey information. GPT-4V has learned to interpret data through a warped cultural lens and is a reflection of the Internet. It lacks the nuance to understand gray areas despite OpenAI’s attempts to enhance the AI’s capabilities.
OpenAI is implementing censorship protocols to deflect harmful prompts; that is, GPT-4V won’t respond to sexist and racist tasks. It’s similar to how YouTube blocks videos that contain trigger or “stop” words: gun, death, etc. OpenAI is proactively preventing bad actors from using ChatGPT as a misinformation tool. But bad actors are smart and will design their own AI chatbots to skirt the censorship. They’ll see it as a personal challenge and will revel when they succeed.
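For what it is worth, the crude version of that keyword screening fits in a few lines. The sketch below is my own illustration of a naive stop-word filter, not OpenAI’s actual moderation pipeline, which is far more elaborate; the blocked terms are arbitrary placeholders.

```python
# Naive illustration of stop-word screening for prompts.
# This is NOT how OpenAI moderates GPT-4V; it shows why keyword lists are easy to skirt.
BLOCKED_TERMS = {"gun", "death"}  # placeholder "trigger" words

def screen_prompt(prompt: str) -> str:
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    if words & BLOCKED_TERMS:
        return "Request refused."
    return "Request accepted."  # a real system would now call the model

print(screen_prompt("Describe a gun in detail"))    # refused
print(screen_prompt("Describe a g_u_n in detail"))  # accepted -- trivially skirted
```

The second call shows the problem: trivial obfuscation sails past a word list, which is why the clever bad actors will not be slowed for long.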
Then what will OpenAI do?
Whitney Grace, October 20, 2023
AI Becomes the Next Big Big Thing with New New Jargon
October 19, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“The State of AI Engineering” is a jargon fiesta. Note: The article has a pop-up that wants the reader to subscribe, which is interesting. The approach is similar to meeting a company rep at a trade show booth and, after reading the signage, saying to the rep, “Hey, let’s do a start-up together right now.” The main point of the article is to provide some highlights from the AI Summit Conference. Was there much “new” new? Judging from the essay, the answer is, “No.” What was significant, in my opinion, was the jargon used to describe the wonders of smart software and its benefits for mankind (themkind?).
Here are some examples:
1,000X AI engineer. The idea with this euphonious catchphrase is that a developer or dev will do so much more than a person coding alone. Imagine a Steve Gibson using AI to create the next SpinRite. That decade of coding shrinks to a mere 30 days!
AI engineering. Yep, a “new” type of engineering. Forget building condos that do not collapse in Florida and social media advertising mechanisms. AI engineering is “new” new I assume.
Cambrian explosion. The idea is that AI is proliferating in the hothouse of the modern innovator’s environment. Hey, mollusks survived. The logic is that some AI startups will too, I assume.
Evals. This is a code word for determining whether a model is on point or busy doing an LSD trip with ingested content. The takeaway is that no one has an “eval” for AI models and their outputs’ reliability.
RAG or retrieval augmented generation. The idea is that RAG is a way to make AI model outputs better. Obviously, without evals, RAG’s value may be difficult to determine, but I am not capturing the jargon to criticize what is the heir to the crypto craziness and its non-fungible token thing. (A minimal sketch of the RAG idea appears below.)
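Since the term will be everywhere for a while, here is a minimal sketch of the RAG idea: retrieve the documents most similar to a question, then stuff them into the prompt handed to the model. The toy bag-of-words scoring and the stub generate() function are my own placeholders; real systems use vector embeddings and an actual LLM call.

```python
# Toy retrieval-augmented generation: retrieve relevant text, prepend it to the prompt.
from collections import Counter

DOCS = [
    "SpinRite is a disk maintenance and recovery utility.",
    "Enterprise search remains an unsolved problem for many organizations.",
    "The Cambrian explosion produced a burst of new life forms.",
]

def score(query: str, doc: str) -> int:
    # Bag-of-words overlap stands in for a real embedding similarity.
    return sum((Counter(query.lower().split()) & Counter(doc.lower().split())).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    return sorted(DOCS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Placeholder for a call to an actual language model.
    return f"[model answer conditioned on: {prompt!r}]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(rag_answer("What does SpinRite do?"))
```

Whether the retrieved context actually improves the answer is, of course, exactly the question the missing evals are supposed to settle.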
I am enervated. Imagine AI will fix enterprise search, improve Oracle Endeca’s product search, and breathe new life into IBM’s AI dreams.
Stephen E Arnold, October 19, 2023
Recent Googlies: The We-Care-About-Your-Experience Outfit
October 18, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I flipped through some recent items from my newsfeed and noted several about everyone’s favorite online advertising platform. Herewith is my selection for today:
ITEM 1. Boing Boing, “Google Reportedly Blocking Benchmarking Apps on Pixel 8 Phones.” If the mobile devices were fast — what the GenX and younger folks call “performant” (weird word, right?) — wouldn’t the world’s largest online ad service make speed test software and its results widely available? If not, perhaps the mobile devices are digital turtles?
Hey, kids. I just want to be your friend. We can play hide and seek. We can share experiences. You know that I do care about your experiences. Don’t run away, please. I want to be sticky. Thanks, MidJourney, you have a knack for dinosaur art. Boy that creature looks familiar.
ITEM 2. The Next Web, “Google to Pay €3.2M Yearly Fee to German News Publishers.” If Google traffic and its benefits were so wonderful, why would the Google pay publishers? Hmmm.
ITEM 3. The Verge (yep, the green weird logo outfit), “YouTube Is the Latest Large Platform to Face EU Scrutiny Regarding the War in Israel.” Why is the EU so darned concerned about an online advertising company which still sells wonderful Google Glass, expresses much interest in a user’s experience, and some fondness for synthetic data? Trust? Failure to filter certain types of information? A reputation for outstanding business policies?
ITEM 4. Slashdot quoted a document spotted by the Verge (see ITEM 3) which includes this statement: “… Google rejects state and federal attempts at requiring platforms to verify the age of users.” Google cares about “user experience” too much to fool with administrative and compliance functions.
ITEM 5. The BBC reports in “Google Boss: AI Too Important Not to Get Right.” The tie up between Cambridge University and Google is similar to the link between MIT and IBM. One omission in the fluff piece: No definition of “right.”
ITEM 6. Arstechnica reports that Google has annoyed the estimable New York Times. Google, it seems, is using its legal brigades to do some Fancy Dancing at the antitrust trial. Access to public trial exhibits has become an issue. Plus, requests from the New York Times are being ignored. Is the Google above the law? What does “public” mean?
Yep, Google googlies.
Stephen E Arnold, October 18, 2023
The Path to Success for AI Startups? Fancy Dancing? Pivots? Twisted Ankles?
October 17, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I read “AI-Enabled SaaS vs Moatless AI.” The buzzwordy title hides a somewhat grim prediction for startups in the AI game. Viggy Balagopalakrishnan (I love that name Viggy) explains that the best shot at success is:
…the only real way to build a strong moat is to build a fuller product. A company that is focused on just AI copywriting for marketing will always stand the risk of being competed away by a larger marketing tool, like a marketing cloud or a creative generation tool from a platform like Google/Meta. A company building an AI layer on top of a CRM or helpdesk tool is very likely to be mimicked by an incumbent SaaS company. The way to solve for this is by building a fuller product.
My interpretation of this comment is that small or focused AI solutions will find competing with big outfits difficult. Some may be acquired. A few may come up with a magic formula for money. But most will fail.
How does that moat work when an AI innovator’s construction is attacked by energy weapons discharged from massive death stars patrolling the commercial landscape? Thanks, MidJourney. Pretty complicated pointy things on the castle with a moat.
Viggy does not touch upon the failure of regulatory entities to slow the growth of companies that some allege are monopolies. One example is the Microsoft game play. Another is the somewhat accommodating investigation of the Google with its closed sessions and odd stance on certain documents.
There are other big outfits as well, and the main idea is that the ecosystem is not set up for most AI plays to survive with huge predators dominating the commercial jungle. That means clever scripts, trade secrets, and agility may not be sufficient to ensure survival.
What’s Viggy think? Here’s an X-ray of his perception:
Given that the infrastructure and platform layers are getting reasonably commoditized, the most value driven from AI-fueled productivity is going to be captured by products at the application layer. Particularly in the enterprise products space, I do think a large amount of the value is going to be captured by incumbent SaaS companies, but I’m optimistic that new fuller products with an AI-forward feature set and consequently a meaningful moat will emerge.
How do moats work when Amazon-, Google-, Microsoft-, and Oracle-type outfits just add AI to their commercial products the way the owner of a Ford Bronco installs a lift kit and roof lights?
Productivity? If that means getting rid of humans, I agree. If the term means, to Viggy, smarter and more informed decision making, I am not sure. Moats don’t work in the 21st century. Land mines, surprise attacks, drones, and missiles seem to be more effective. Can small firms deal with the likes of Googzilla, the Bezos bulldozer, and legions of Softies? Maybe. Viggy is an optimist. I am a realist with a touch of radical empiricism, a tasty combo indeed.
Stephen E Arnold, October 17, 2023
Big, Fat AI Report: Free and Meaty for Marketing Collateral
October 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Curious about AI, machine learning, and smart software? You will want to obtain a free (at least as of October 6, 2023) report called “Artificial Intelligence Index Report 2023.” The 386-page PDF contains information selected to make it clear that AI is a big deal. There is no reference to the validity of the research conducted for the document. I find that interesting since the president of Stanford University stepped carefully from the speeding world of academia to find his future elsewhere. Making up data seems to be a signature feature of outfits like Stanford and, of course, Harvard.
A Musk-inspired robot reads a print out of the PDF report. The robot looks … like a robot. Thanks, Microsoft Bing. You do a good robot.
But back to the report.
For those who lack the time and a swipe-left deflector, a two-page summary identifies the big finds from the work. Let me highlight three, or 30 percent, of the knowledge gems. Please consult the full report for the other seven discoveries. No blood pressure reduction medicine is needed, but you may want to use the time between plays at an upcoming NFL game to work through the full document.
Three big reveals:
- AI continued to post state-of-the-art results, but year-over-year improvement on many benchmarks continues to be marginal.
- … The number of AI-related job postings has increased on average from 1.7% in 2021 to 1.9% in 2022.
- An AI Index analysis of the legislative records of 127 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just 1 in 2016 to 37 in 2022.
My interpretation of the full suite of 10 key points: The hype is stabilizing.
Who funded the project? Not surprisingly, the Google and OpenAI kicked in. There is a veritable who’s who of luminaries and high-profile research outfits providing some assistance as well. Headhunters will probably want to print out the pages with the names and affiliations of the individuals listed. One never knows where the next Elon Musk lurks.
The report has eight chapters, but the bulk of the information appears in the first four; to wit:
- R&D
- Technical performance
- Technical AI ethics
- The economy.
I want to be up front. I scanned the document. Does it confront issues like the objective of Google and a couple of other firms dominating the AI landscape? Nah. Does it talk about the hallucination and ethical features of smart software? Nah. Does it delve into the legal quagmire which seems to be spreading faster than dilapidated RVs parked on El Camino Real? Nah.
I suggest downloading a copy and checking out the sections which appear germane to your interests in AI. I am happy to have a copy for reference. Marketing collateral from an outfit whose president resigned due to squishy research does not reassure me. Yes, integrity matters to me. Others? Maybe not.
Stephen E Arnold, October 12, 2023
Open Source Companies: Bet on Expandability and Extendibility
October 12, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Naturally, a key factor driving adoption of open source software is a need to save money. However, argues Lago co-founder Anh-Tho Chuong, “Open Source Does Not Win by Being Cheaper” than the competition. Not just that, anyway. She writes:
“What we’ve learned is that open-source tools can’t rely on being an open-source alternative to an already successful business. A developer can’t just imitate a product, tag on an MIT license, and call it a day. As awesome as open source is, in a vacuum, it’s not enough to succeed. … [Open-source companies] either need a concrete reason for why they are open source or have to surpass their competitors.”
One caveat: Chuong notes she is speaking of businesses like hers, not sponsored community projects like React, TypeORM, or VSCode. Outfits that need to turn a profit to succeed must offer more than savings to distinguish themselves, she insists. The post notes two specific problems open-source developers should aim to solve: transparency and extensibility. It is important to many companies to know just how their vendors are handling their data (and that of their clients). With closed software one just has to trust that information is secure. The transparency of open-source code allows one to verify that it is. The extensibility advantage comes from the passion of community developers for plugins, which are often merged into the open-source main branch. It can be difficult for closed-source engineering teams to compete with the resulting extensibility.
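The extensibility point is mechanical as much as cultural: an open code base can expose a plugin hook that outside developers fill in, and popular plugins get merged upstream. Below is a bare-bones sketch of such a hook; the names and the tax example are invented purely for illustration, not taken from Lago or any of the products mentioned.

```python
# Bare-bones plugin registry of the kind open-source tools expose to their communities.
from typing import Callable, Dict

PLUGINS: Dict[str, Callable[[dict], dict]] = {}

def register_plugin(name: str):
    """Decorator community developers use to hook into the processing pipeline."""
    def wrap(func: Callable[[dict], dict]) -> Callable[[dict], dict]:
        PLUGINS[name] = func
        return func
    return wrap

@register_plugin("add_tax")  # hypothetical community-contributed step
def add_tax(invoice: dict) -> dict:
    invoice["total"] = round(invoice["total"] * 1.2, 2)
    return invoice

def process(invoice: dict) -> dict:
    # The core product runs every registered plugin; merged plugins ship to everyone.
    for step in PLUGINS.values():
        invoice = step(invoice)
    return invoice

print(process({"total": 100.0}))  # {'total': 120.0}
```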
See the write-up for examples of both advantages from the likes of MongoDB, PostHog, and Minio. Chuong concludes:
“Both of the above issues contribute to commercial open-source being a better product in the long run. But by tapping the community for feedback and help, open-source projects can also accelerate past closed-source solutions. … Open-source projects—not just commercial open source—have served as a critical driver for the improvement of products for decades. However, some software is going to remain closed source. It’s just the nature of first-mover advantage. But when transparency and extensibility are an issue, an open-source successor becomes a real threat.”
Cynthia Murrell, October 12, 2023
Cognitive Blind Spot 3: You Trust Your Instincts, Right?
October 9, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
ChatGPT became available in the autumn of 2022. By December, a young person fell in love with his chatbot. From this dinobaby’s point of view, that was quicker than a love affair ignited by a dating app. “Treason Case: What Are the Dangers of AI Chatbots?” misses the point of its own reporter’s story. The Beeb puts the blame on Jaswant Singh Chail, not the software. Justice needs an individual, not a pride of zeros and ones.
A bad actor tries to convince other criminals that he is honest, loyal, trustworthy, and an all-around great person. “Trust me,” he says. Some of those listening to the words are skeptical. Thanks, MidJourney. You are getting better at depicting duplicity.
Here’s the story: Shortly after discovering an online chatbot, Mr. Chail fell in love with “an online companion.” The Replika app allows a user to craft a chatbot. The protagonist in this love story promptly moved from casual chit chat to emotional attachment. As the narrative arc unfolded, Mr. Chail confessed that he was an assassin, and he wanted to kill the Queen of England. Mr. Chail planned on using a crossbow.
The article reports:
Marjorie Wallace, founder and chief executive of mental health charity SANE, says the Chail case demonstrates that, for vulnerable people, relying on AI friendships could have disturbing consequences. “The rapid rise of artificial intelligence has a new and concerning impact on people who suffer from depression, delusions, loneliness and other mental health conditions,” she says.
That seems reasonable. The software meshed nicely with the cognitive blind spot of trusting one’s intuition. Some call this “gut” feel. The label matters less than the confusion of software with reality.
But what happens when the new Google Pixel 8 camera enhances an image automatically? Who wants a lousy snap? Google appears to favor a Mother Google approach. When an image is manipulated, either in a still or a video, what does one’s gut say? “I trust pictures and videos for accuracy.” Like the young, would-be, off-the-rails chatbot lover, zeros and ones can create some interesting effects.
What about you, gentle reader? Do you know how to recognize an unhealthy interaction with smart software? Can you determine if an image is “real” or the fabrication of a large outfit like Google?
Stephen E Arnold, October 9, 2023