Just for the Financially Irresponsible: Social Shopping
December 7, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Amazon likes to make it as easy as possible for consumers to fork over their hard-earned cash on a whim. More steps between seeing a product and checking out means more time to reconsider a spontaneous purchase, after all. That is why the company has been working to integrate purchases into social media platforms. Payment-platform news site PYMNTS reports on the latest linkage in, “Amazon Extends Social Shopping Efforts with Snapchat Deal.” Amazon’s partnership with Meta had already granted it quick access to eyeballs and wallets at Facebook and Instagram. Now users of all three platforms will be able to link those social media accounts to their Amazon accounts. We are told:
“It’s a partnership that lets both companies play to their strengths: Amazon gets to help merchants find customers who might not have actively sought out their products. And Meta’s discovery-based model lets users receive targeted ads without searching for them. Amazon also has a deal with Pinterest, signed in April, designed to create more shoppable content by enhancing the platform’s offering of relevant products and brands. These partnerships are happening at a moment when social media has become a crucial tool for consumers to find new products.”
That is one way to put it. Here is another: the deals let Amazon take advantage of users’ cognitive haze. Scrolling social media has been linked to information overload, shallow thinking, reduced attention span, and fragmented thoughts. A recipe for perfect victims. I mean, customers. We wonder what Meta is getting in exchange for handing them over.
Cynthia Murrell, December 7, 2023
Forget Deep Fakes. Watch for Shallow Fakes
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
“A Tech Conference Listed Fake Speakers for Years: I Accidentally Noticed” revealed a factoid about which I knew absolutely zero. The write up reveals:
For 3 years straight, the DevTernity conference listed non-existent software engineers representing Coinbase and Meta as featured speakers. When were they added, and what could the motivation have been?
The article identifies and includes what appear to be “real” pictures of a couple of these made-up speakers. What’s interesting is that only females seem to be made up. Is that perhaps because conference organizers like to take the easiest path, choosing people who are “in the news” or “friends”? In the technology world, I see more entities which appear to be male than appear to be non-males.
Shallow fakes. Deep fakes. What’s the problem? Thanks, MSFT Copilot. Nice art which you achieved exactly how? Oh, don’t answer that question. I don’t want to know.
But since I don’t attend many conferences, I am not in touch with demographics. Furthermore, I am not up to speed on fake people. To be honest, I am not too interested in people, real or fake. After a half century of work, I like my French bulldog.
The write up points out:
We’ve not seen anything of this kind of deceit in tech – a conference inventing speakers, including fake images – and the mainstream media covered this first-of-a-kind unethical approach to organizing a conference.
That’s good news.
I want to offer a handful of thoughts about creating “fake” people for conferences and other business efforts:
- Why not? The practice went unnoticed for years.
- Creating digital “fakes” is getting easier, and the tools are becoming more effective at duplicating “reality” (whatever that is). It strikes me that people looking for a shortcut for a diverse Board of Directors, speaker lineup, or a LinkedIn reference might find the shortest, easiest path to shape reality for a purpose.
- The method used to create a fake speaker is more correctly termed a “shallow” fake. Why? As the author of the cited paper points out, disproving the reality of the fakes was easy and took little time.
Let me shift gears. Why would conference organizers find fake speakers appealing? Here are some hypotheses:
- Conferences fall into a “speaker rut”; that is, organizers become familiar with certain speakers and consciously or unconsciously slot them into the next program because they are good speakers (one hopes), friendly, or don’t make unwanted suggestions to the organizers.
- Conference staff are overworked and understaffed. Applying some smart workflow magic to organizing and filling in the blank spaces on the program makes the use of fakery appealing, at least at one conference. Will others learn from this method?
- Conferences have become more dependent on exhibitors. Over the years, renting booth space has become a way for a company to be featured on the program. Yep, advertising, just advertising linked to “sponsors” of social gatherings or Platinum and Gold sponsors who get to put marketing collateral in a cheap nylon bag foisted on every registrant.
I applaud this write up. Not only will it give people ideas about how to use “fakes,” it will also inspire innovation in surprising ways. Why not “fake” consultants on a Zoom call? There’s an idea for you.
Stephen E Arnold, December 6, 2023
AI: Big Ideas Become Money Savers and Cost Cutters
December 6, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Earlier this week (November 28, 2023), the British newspaper The Guardian published “Sports Illustrated Accused of Publishing Articles Written by AI.” The main idea is that dependence on human writers became the focus of a bunch of bean counters. The magazine has a reasonably high profile among a demographic not focused on discerning the difference between machine output and sleek, intellectual, well-groomed New York “real” journalists. Some cared. I didn’t. It’s moneyball in the news business.
The day before the Sports Illustrated slick business and PR move, I noted a Murdoch-infused publication’s revelation about smart software. Barron’s published “AI Will Create—and Destroy—Jobs. History Offers a Lesson.” Barron’s wrote about it; Sports Illustrated got snared doing it.
Barron’s said:
That AI technology will come for jobs is certain. The destruction and creation of jobs is a defining characteristic of the Industrial Revolution. Less certain is what kind of new jobs—and how many—will take their place.
Okay, the Industrial Revolution. Exactly how long did that take? What jobs were destroyed? What were the benefits at the beginning, the middle, and the end of the Industrial Revolution? What were the downsides of the disruption which unfolded over time? Decades, wasn’t it?
The AI “revolution” is perceived to be real. Investors, testosterone-charged venture capitalists, and some Type A students are going to make the AI Revolution a reality. Damn the regulators, the copyright complainers, and the dinobabies who want to read, think, and write for themselves.
Barron’s noted:
A survey conducted by LinkedIn for the World Economic Forum offers hints about where job growth might come from. Of the five fastest-growing job areas between 2018 and 2022, all but one involve people skills: sales and customer engagement; human resources and talent acquisition; marketing and communications; partnerships and alliances. The other: technology and IT. Even the robots will need their human handlers.
I can think of some interesting jobs. Thanks, MSFT Copilot. You did ingest some 19th century illustrations, didn’t you, you digital delight.
Now those are rock solid sources: Microsoft’s LinkedIn and the charming McKinsey & Company. (I think of McKinsey as the opioid innovators, but that’s just my inexplicable predisposition toward an outstanding bastion of ethical behavior.)
My problem with the Sports Illustrated AI move and the Barron’s essay boils down to the bipolarism which surfaces when a new next big thing appears on the horizon. Predicting what will happen when a technology smashes into business billiard balls is fraught with challenges.
One thing is clear: The balls are rolling, and journalists, paralegals, consultants, and some knowledge workers are going to find themselves in the side pocket. The way out might be making TikToks or selling gadgets on eBay.
Some will say, “AI took our jobs, Billy. Now what?” Yes, now what?
Stephen E Arnold, December 6, 2023
Harvard University: Does Money Influence Academic Research?
December 5, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Harvard University has been on my radar since the ethics misstep. In case your memory is fuzzy, Francesca Gino, a big thinker about ethics and taking shortcuts, was accused of data fraud. The story did not attract much attention in rural Kentucky. Ethics and dishonesty? Come on. Harvard has to do some serious training to catch up with a certain university in Louisville. For a reasonable explanation of the allegations (because, of course, one will never know), navigate to “Harvard Professor Who Studies Dishonesty Is Accused of Falsifying Data” and dig in.
Thanks, MSFT Copilot, you have nailed the depressive void that comes about when philosophers learn that ethics suck.
Why am I thinking about Harvard and ethics? The answer is that I read “Harvard Gutted Initial Team Examining Facebook Files Following $500 Million Donation from Chan Zuckerberg Initiative, Whistleblower Aid Client Reveals.” I have no idea if the write up is spot on, weaponized information, or the work of someone who did not get into one of the university’s numerous money generating certification programs.
The write up asserts:
Harvard University dismantled its prestigious team of online disinformation experts after a foundation run by Facebook’s Mark Zuckerberg and his wife Priscilla Chan donated $500 million to the university, a whistleblower disclosure filed by Whistleblower Aid reveals. Dr. Joan Donovan, one of the world’s leading experts on social media disinformation, says she ran into a wall of institutional resistance and eventual termination after she and her team at Harvard’s Technology and Social Change Research Project (TASC) began analyzing thousands of documents exposing Facebook’s knowledge of how the platform has caused significant public harm.
Let’s assume that the allegation is horse feathers, not to be confused with Intel’s fabulous Horse Ridge. Harvard still has to do some fancy dancing with regard to the ethics professor and expert in dishonesty who is alleged to have violated the esteemed university’s ethics guidelines and was dishonest.
If we assume that the information in Dr. Donovan’s whistleblower declaration is close enough for horse shoes, something equine can be sniffed in the atmosphere of Dr. William James’s beloved institution.
What could Facebook or the Metazuck do which would cause significant public harm? The options include providing tools to disseminate information which sparks body shaming, self-harm, and angst among young users. Are old-timers possibly affected? I suppose buying interesting merchandise on Facebook Marketplace and experiencing psychological problems as a result of defriending are possibilities too.
If the allegations are proven to be accurate, what are the consequences for the two esteemed organizations? My hunch is zero. Money talks; prestige walks away to put ethics on display for another day.
Stephen E Arnold, December 5, 2023
Why Google Dorks Exist and Why Most Users Do Not Know Why They Are Needed
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Many people in my lectures are not familiar with the concept of “dorks”. No, not the human variety. I am referencing the concept of a “Google dork.” If you do a quick search using Yandex.com, you will get pointers to different “Google dorks.” Click on one of the links and you will find information you can use to retrieve more precise and relevant information from the Google ad-supported Web search system.
Here’s how the QDORKS.com interface works:
The idea is that one plugs in search terms and uses the pull-down boxes to enter specific commands to point the ad-centric system at something more closely resembling a relevant result. Other interfaces are available; for example, the “1000 Best Google Dorks List.” You get a laundry list of tips, commands, and ideas for wrestling Googzilla to the ground, twisting its tail, and (hopefully) extracting relevant information. Hopefully. Good luck.
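To make the notion concrete, here is a minimal sketch of how dork operators combine into a precision query. The operators (site:, filetype:, intitle:, quoted exact phrases) are standard Google search syntax; the build_dork helper itself is a hypothetical illustration, not a QDORKS.com tool:

```python
# A small sketch of assembling a "Google dork" style query.
# The operators are standard Google search syntax; the helper
# function is a hypothetical illustration, not a QDORKS.com tool.

def build_dork(terms, site=None, filetype=None, intitle=None, exact=None):
    """Combine plain terms with dork operators into one query string."""
    parts = list(terms)
    if exact:
        parts.append(f'"{exact}"')            # exact-phrase match
    if site:
        parts.append(f"site:{site}")          # restrict results to one domain
    if filetype:
        parts.append(f"filetype:{filetype}")  # e.g. pdf, xls, ppt
    if intitle:
        parts.append(f"intitle:{intitle}")    # term must appear in page title
    return " ".join(parts)

if __name__ == "__main__":
    # Find PDF annual reports on a single site.
    print(build_dork(["annual", "report"], site="example.com", filetype="pdf"))
    # -> annual report site:example.com filetype:pdf
```

Paste the output into the Google search box, and the ad-centric system has far less room for semantic relaxation.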
Most people are lousy at pinning the tail on the relevance donkey. Therefore, let someone who knows define relevance for the happy people. Thanks, MSFT Copilot. Nice animal with map pins.
Why are Google Dorks or similar guides to Google search necessary? Here are three reasons:
- Precision reduces the opportunities for displaying allegedly relevant advertising. Semantic relaxation allows the Google to suggest that it is using Oingo type methods to find mathematically determined relationships. The idea is that razzle dazzle makes ad blasting something like an ugly baby wrapped in translucent fabric on a foggy day look really great.
- When Larry Page argued with me at a search engine meeting about truncation, he displayed a preconceived notion about how search should work for those not at Google or attending a specialist conference about search. Rational? To him, yep. Logical? To his framing of the search problem, the stance makes perfect sense if one discards the notion of tense, plurals, inflections, and stupid markers like “im” as in “impractical” and “non” as in “nonsense.” Hey, Larry had the answer. Live with it.
- The goal at the Google is to make search as intellectually easy for the “user” as possible. The idea was to suggest what the user intended. Also, Google had the old idea that a person’s past behavior can predict that person’s behavior now. Well, predict in the sense that “good enough” will do the job for the vast majority of search-blind users who look for the shortcut or the most convenient way to get information.
Why? Control, being clever, and then selling the dream of clicks for advertisers. Over the years, Google leveraged its information framing power into a position of control. I want to point out that most people, including many Googlers, cannot perceive this. When it is pointed out, those individuals refuse to believe that Google does [a] NOT index the full universe of digital data, [b] NOT want to fool around with users who prefer Boolean algebra and content curation to identify the best or most useful content, and [c] NOT fiddle around with training people to become effective searchers of online information. Obfuscation, verbal legerdemain, and the “do no evil” craziness make the railroad run the way Cornelius Vanderbilt-types implemented it.
I read this morning (December 4, 2023) the Google blog post called “New Ways to Find Just What You Need on Search.” The main point of the write up in my opinion is:
Search will never be a solved problem; it continues to evolve and improve alongside our world and the web.
I agree, but it would be great if the known search and retrieval functions were available to users. Instead, we have a weird Google Mom approach. From the write up:
To help you more easily keep up with searches or topics you come back to a lot, or want to learn more about, we’re introducing the ability to follow exactly what you’re interested in.
Okay, user tracking, stored queries, and alerts. How does the Google know what you want? The answer is that users log in, use Google services, and enter queries which are automatically converted into stored searches. You will have answers to questions you really care about.
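Mechanically, “following” a topic looks a lot like the decades-old stored-query alert pattern. A minimal sketch of that pattern follows; the run_search() and notify() functions are hypothetical stand-ins, not Google APIs:

```python
# A minimal sketch of the stored-query / alert pattern that "following"
# a topic resembles. run_search() and notify() are hypothetical stand-ins
# for a real search backend and a real alert channel, not Google APIs.

import json
from pathlib import Path

STATE_FILE = Path("seen_results.json")

def run_search(query: str) -> list[str]:
    """Hypothetical search call; returns result URLs for the query."""
    return [f"https://example.com/{query.replace(' ', '-')}/latest"]

def notify(query: str, new_urls: list[str]) -> None:
    """Hypothetical alert channel; here it just prints."""
    print(f"[alert] {len(new_urls)} new result(s) for {query!r}: {new_urls}")

def check_followed_topics(queries: list[str]) -> None:
    """Re-run each stored query and alert on anything not seen before."""
    seen = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    for q in queries:
        urls = run_search(q)
        new = [u for u in urls if u not in set(seen.get(q, []))]
        if new:
            notify(q, new)
        seen[q] = list(set(seen.get(q, [])) | set(urls))
    STATE_FILE.write_text(json.dumps(seen))

if __name__ == "__main__":
    check_followed_topics(["taylor swift posters"])  # run this on a schedule
```

The difference is who holds the stored queries: you, or the ad-supported system that decides what counts as a result.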
There are other search functions available in the most recent version of Google’s attempts to deal with an unsolved problem:
As with all information on Search, our systems will look to show the most helpful, relevant and reliable information possible when you follow a topic.
Yep, Google is a helicopter parent. Mom will know what’s best, select it, and present it. Don’t like it? Mom will be recalcitrant, shaping search results to match what the probabilistic system dictates: “Take your medicine, you brat.” Who said, “Mother Google is a nice mom”? Definitely not me.
And Google will make search more social. Shades of Dr. Alon Halevy and the heirs of Orkut. The Google wants to bring people together. Social signals make sense to Google. Yep, content without Google ads must be conquered. Let’s hope the Google incentive plans encourage the behavior, or those valiant programmers will be bystanders to other Googlers’ promotions and accompanying money deliveries.
Net net: Finding relevant, on point, accurate information is more difficult today than at any other point in my 50+ year work career. How does the cloud of unknowing dissipate? I have no idea. I think it has moved in on tiny Googzilla feet and sits looking over the harbor, ready to pounce on any creature that challenges the status quo.
PS. Corny Vanderbilt was an amateur compared to the Google. He did trains; Google does information.
Stephen E Arnold, December 4, 2023
Good Fences, Right, YouTube? And Good Fences in Winter Even Better
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Remember that line from the grumpy American poet Bobby Frost? (I have it on good authority that Bobby was not a charmer. And who, pray tell, was my source? How about a friend of the poet’s who worked with him in South Shaftsbury?)
Like those in the Nor’East say, “Good fences make good neighbors.”
The line is not original. Bobby’s pal told me that the saying was a “pretty common one” among the Shaftsburians. Bobby appropriated the line in his poem “Mending Wall” (loved by millions of high school students). The main point of the poem is that “Something there is that doesn’t love a wall.” The key is “something.”
The fine and judicious, customer centric, and well-managed outfit Google is now in the process of understanding the “something that doesn’t love a wall,” digital or stone.
“Inside the Arms Race between YouTube and Ad Blockers” updates the arms race between the estimable advertising outfit and — well — almost everyone. The article explains:
YouTube recently took dramatic action against anyone visiting its site with an ad blocker running — after a few pieces of content, it’ll simply stop serving you videos. If you want to get past the wall, that ad blocker will (probably) need to be turned off; and if you want an ad-free experience, better cough up a couple bucks for a Premium subscription.
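The gating the article describes is easy to picture as code. Here is a toy sketch of the “few pieces of content, then the wall” logic, assuming a simple per-client counter; every name is hypothetical, and YouTube’s actual server-side implementation is not public:

```python
# A toy sketch of "serve a few videos, then stop" ad-block gating as the
# article describes it. All names are hypothetical; YouTube's real
# server-side logic is not public.

from collections import defaultdict

FREE_PLAYS = 3                      # videos allowed before the wall goes up
plays_by_client = defaultdict(int)  # client_id -> videos served while blocking

def request_video(client_id: str, ad_blocker_detected: bool,
                  premium_subscriber: bool) -> str:
    """Decide whether to serve a video, per the gating described."""
    if premium_subscriber or not ad_blocker_detected:
        return "serve video"
    plays_by_client[client_id] += 1
    if plays_by_client[client_id] > FREE_PLAYS:
        return "refuse: turn off the ad blocker or buy Premium"
    return "serve video (grace period)"

if __name__ == "__main__":
    for attempt in range(1, 6):
        print(attempt, request_video("viewer-1", True, False))
    # attempts 1-3 play; attempts 4-5 hit the wall
```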
The write up carefully explains that one must pay a “starting” monthly fee of $13.99 to avoid the highly relevant advertisements for metal men’s wallets, the total home gym which seems wholly inappropriate for a 79-year-old dinobaby like me, and some type of women’s undergarment. Yeah, that ad matching to a known user is doing a bang-up job in my opinion. I bet the Skims marketing manager is thrilled I am getting their message. How many packs of Skims do I buy in a lifetime? Zero. Yep, zero.
Yes, sir. Good fences make good neighbors. Good enough, MSFT Copilot. Good enough.
Okay, that’s the ad blocker thing, which I have identified as Google’s digital Battle of Waterloo in honor of a movie about everyone’s favorite French emperor, Nappy B.
But what the cited write up and most of the coverage do not focus on is the question, “Why the user hostile move?” I want to share some of my team’s ideas about the motive force behind this disliked and quite annoying move by that company everyone loves (including the Skims marketing manager?).
First, the emergence of ChatGPT type services is having a growing impact on Google’s online advertising business. One can grind through Google’s financials and not find any specific item that says, “The Duke of Wellington and a crazy old Prussian are gearing up for a fight.” So I will share some information we have rounded up by talking to people and looking through the data gathered about Googzilla. Specifically, users want information packaged to answer or to “appear” to answer their question. Some want lists; some want summaries; and some just want to avoid the ritual: enter the query, click through mostly irrelevant results, scan for something that is sort of close to an answer, and use that information to buy a ticket or get a Taylor Swift poster, whatever. That means that the broad trend in the usage of Google search is a bit like the town of Grindavik, Iceland. “Something” is going on, and it is unlikely to bode well for the future of that charming town in Iceland. That’s the “something” that is hostile to walls. Some forces are tough to resist, even by Googzilla and friends.
Second, despite the robust usage of YouTube, it costs more money to operate that service than it does to serve ads and previously spidered information from Google compliant Web sites out of a cache. Thus, as pressure on traditional search from the ChatGPT type services goes up, the clouds on the search business horizon grow darker. The big storm is not pelting the Googleplex yet, but it does look ominous perched on the horizon and moving slowly. Don’t get our point wrong: Running a Google scale search business is expensive, but it has been engineered and tuned to deliver a tsunami of cash. The YouTube thing just costs more and is going to have a tough time replacing lost old-fashioned search revenue. What’s a pressured Googzilla going to do? One answer is, “Charge users.” Then raise prices. Gee, that’s the much-loved cable model, isn’t it? And the pressure point is motivating some users who are developers to find ways to cut holes in the YouTube fence. The fix? Make the fence bigger and more durable? Isn’t that a RAND arms race scenario? What’s an option? Where’s a J. Robert Oppenheimer-type when one needs him?
The third problem is that advertisers want their messages displayed in a non-offensive context. Also, advertisers — because the economy for some outfits sucks — now are starting to demand proof that their ads are being displayed in front of buyers known to have an interest in their product. Yep, I am talking about the Skims marketing officer as well as any intermediary hosing money into Google advertising. I don’t want to try to convince those who are writing checks to the Google of the following: “Absolutely. Your ad dollars are building your brand. You are getting leads. You are able to reach buyers no other outfit can deliver.” Want proof? Just look at this dinobaby. I am not buying health food, hidden carry holsters, or those really cute flesh-colored women’s undergarments. The question is, “Are the ads just being dumped, or are they actually targeted to someone who is interested in a product category?” Good question, right?
Net net: The YouTube ad blocking is shaping up to be a Google moment. Now Google has sparked an adversarial escalation in the world of YouTube ad blockers. What are Google’s options now that Googzilla is backed into a corner? Maybe Bobby Frost has a poem about it: “Some say the world will end in fire, Some say in ice.” How does Googzilla fare in the ice?
Stephen E Arnold, December 4, 2023
The RAG Snag: Convenience May Undermine Thinking for Millions
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
My understanding is that everyone who is informed about AI knows about RAG. The acronym means Retrieval Augmented Generation. One explanation of RAG appears in the nVidia blog in the essay “What Is Retrieval Augmented Generation aka RAG.” nVidia, in my opinion, loves whatever drives demand for its products.
The idea is to use machine processes to minimize errors in output. The write up states:
Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.
Simplifying the idea, RAG methods gather information and perform reference checks. The checks can be performed by consulting other smart software, the Web, or knowledge bases like an engineering database. nVidia provides a “reference architecture,” which obviously relies on nVidia products.
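Stripped of the vendor gloss, the pattern is easy to sketch. Below is a toy illustration of the retrieve-then-generate loop, assuming a naive keyword retriever and a hypothetical generate() stand-in for an LLM call; this is not nVidia’s reference architecture:

```python
# A toy sketch of the retrieval-augmented generation pattern:
# fetch supporting passages first, then hand them to the model as context.
# The retriever is naive keyword overlap; generate() is a hypothetical
# stand-in for any LLM call. This is not nVidia's reference architecture.

KNOWLEDGE_BASE = [
    "The pump must be serviced every 500 operating hours.",
    "Bearing type 6204 is rated for 15,000 rpm.",
    "RAG fetches facts from external sources to ground model output.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a language model."""
    return f"[model answer grounded in prompt of {len(prompt)} chars]"

def rag_answer(question: str) -> str:
    """Retrieve context, then ask the model to answer from it only."""
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How often should the pump be serviced?"))
```

The retrieval step is the whole trick: whoever curates the knowledge base decides what the model is allowed to “know.”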
The write up does an obligatory tour of a couple of search and retrieval systems. Why? Most of the trendy smart software packages are demonstrations of information access methods wrapped up in tools that think for the “searcher” or person requiring output to answer a question, explain a concept, or help the human to think clearly. (In dinosaur days, such software performed functions once associated with a special librarian or an informed colleague who could ask questions and conduct a reference interview. I hope the dusty concepts did not make you sneeze.)
“Yes, young man. The idea of using multiple sources can result in learning. We now call this RAG, not research.” The young man, stunned by the insight, says, “WTF?” Thanks, MSFT Copilot. I love the dual tassels. The young expert is obviously twice as intelligent as the somewhat more experienced dinobaby with the weird fingers.
The article includes a diagram which I found difficult to read. I think the simple blocks represent the way in which smart software obviates the need for the user to know much about sources, verification, or provenance about the sources used to provide information. Furthermore, the diagram makes the entire process look just like getting the location of a pizza restaurant from an iPhone (no Google Maps for me).
The highlight of the write up is the set of links within the article. An interested reader can follow them for additional information.
Several observations:
- The emergence of RAG as a replacement for such concepts as “search”, “special librarian,” and “provenance” makes clear that finding information is a problem not solved for systems, software, and people. New words make the “old” problem appear “new” again.
- The push for recursive methods to figure out what’s “right” or “correct” will regress to the mean; that is, despite the mathiness of the methods, systems will deliver “acceptable” or “average” outputs. A person who thinks that software will impart genius to a user is believing in a dream. These individuals will not be living the dream.
- Widespread use of smart software and automation means that, for most people, critical thinking will become the equivalent of an appendix. Instead of mother knows best, the system will provide the framing, the context, and the implication that the outputs are correct.
RAG opens new doors: those who operate widely adopted smart software systems will have significant control over what people think and, thus, do. If the iPhone shows a pizza joint, what about other pizza joints? Just ask. The system will not show pizza joints not verified in some way. If that “verification” requires the company advertising to be in the data set, well, that’s the set of pizza joints one will see. The others? Invisible, off the radar, and seemingly doomed to failure.
RAG is significant because it is newspeak, and it marks a disassociation of “knowing” from “accepting” output information as the best and final words on a topic. I want to point out that for a small percentage of humans, their superior cognitive abilities will ensure a different trajectory. The “new elite” will become the individuals who design, shape, control, and deploy these “smart” systems.
Most people will think they are informed because they can obtain information from a device. The mass of humanity will not know how information control influences their understanding and behavior. Am I correct? I don’t know. I do know one thing: This dinobaby prefers to do knowledge acquisition the old fashioned, slow, inefficient, and uncontrolled way.
Stephen E Arnold, December 4, 2023
Health Care and Steerable AI
December 4, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Large language models are powerful tools that can be used for the betterment of humanity. Or, in the hands of for-profit entities, to get away with wringing every last penny out of a system in the most opaque and intractable ways possible. When that system manages the wellbeing of millions and millions of people, the fallout can be tragic. TechDirt charges, “’AI’ Is Supercharging our Broken Healthcare System’s Worst Tendencies.”
Reporter Karl Bode begins by illustrating the bad blend of corporate greed and AI with journalism as an example. Media companies, he writes, were so eager to cut corners and dodge unionized labor they adopted AI technology before it was ready. In that case the results were “plagiarism, bull[pucky], a lower quality product, and chaos.” Those are bad. Mistakes in healthcare are worse. We learn:
“Not to be outdone, the very broken U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of a very broken system. Except here, human lives are at stake. For example UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless [poop]whistle this whole system already is long before automation gets involved. But a recent investigation by STAT showed the AI consistently made major errors and cut elderly folks off from needed care prematurely, with little recourse by patients or families. … A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI in question was reversed by human review roughly 90 percent of the time.”
And yet, employees were ordered to follow the algorithm’s decisions no matter their inanity. For the few patients who did win hard-fought reversals, those decisions were immediately followed by fresh rejections that kicked them back to square one. Bode writes:
“The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.”
But is there hope these trends will be eventually curtailed? Well, no. The write-up concludes by scoffing at the idea that government regulations or class action lawsuits are any match for corporate greed. Sounds about right.
Cynthia Murrell, December 4, 2023
AI Adolescence Ascendance: AI-iiiiii!
December 1, 2023
This essay is the work of a dumb dinobaby. No smart software required.
The monkey business of smart software has revealed its inner core. The cute high school essays and the comments about how to do search engine optimization are based on the fundamental elements of money, power, and what I call ego-tanium. When these fundamental elements go critical, exciting things happen. I know this assertion is correct because I read “The AI Doomers Have Lost This Battle”, an essay which appears in the weird orange newspaper The Financial Times.
The British bastion of practical financial information says:
It would be easy to say that this chaos showed that both OpenAI’s board and its curious subdivided non-profit and for-profit structure were not fit for purpose. One could also suggest that the external board members did not have the appropriate background or experience to oversee a $90bn company that has been setting the agenda for a hugely important technology breakthrough.
In my lingo, the orange newspaper is pointing out that a high school science club management style is like a burning electric vehicle. Once ignited, the message is, “Stand back, folks. Let it burn.”
“Isn’t this great?” asks the driver. The passenger, a former Doomsayer, replies, “AIiiiiiiiiii.” Thanks, MidJourney, another good enough illustration which I am supposed to be able to determine contains copyrighted material. Exactly how, may I ask? Oh, you don’t know.
The FT picks up a big-picture idea; that is, smart software can become a problem for humanity. That’s interesting because the book “Weapons of Math Destruction” did a good job of explaining why algorithms can go off the rails. But the FT’s essay embraces the idea of software as the Terminator with the enthusiasm of the crazy old-time guy who shouted “Eureka.”
I note this passage:
Unfortunately for the “doomers”, the events of the last week have sped everything up. One of the now resigned board members was quoted as saying that shutting down OpenAI would be consistent with the mission (better safe than sorry). But the hundreds of companies that were building on OpenAI’s application programming interfaces are scrambling for alternatives, both from its commercial competitors and from the growing wave of open-source projects that aren’t controlled by anyone. AI will now move faster and be more dispersed and less controlled. Failed coups often accelerate the thing that they were trying to prevent.
Okay, the yip yap about slowing down smart software is officially wrong. I am not sure about the government committees and their white papers about artificial intelligence. Perhaps the documents can be printed out and used to heat the camp sites of knowledge workers who find themselves out of work.
I find it amusing that some of the governments worried about smart software are involved in autonomous weapons. The idea of a drone which uses a facial recognition component to pick out a target and then explode over the person’s head is an interesting one.
Is there a connection between the high school antics of OpenAI, the hand-wringing about smart software, and the diffusion of decider systems? Yes, and the relationship is one of those hockey stick curves so loved by MBAs from prestigious US universities. (Non-reproducibility and a fondness for Jeffrey Epstein-type donors are normative behavior.)
Those who want to cash in on the next Big Thing are officially in the 2023 equivalent of the California gold rush. Unlike the FT, I had no doubt about the ascendance of the go-fast approach to technological innovation. Technologies, even lousy ones, are like gerbils. Start with two or three, and pretty soon there are lots of gerbils.
Will the AI gerbils and their progeny be good or bad? Because they are based on the essential elements of life — money, power, and ego-tanium — the outlook is … exciting. I am glad I am a dinobaby. Too bad about the Doomers, who are regrouping to try to build a shield around the most powerful elements now emitting excited particles. The glint in the eyes of Microsoft executives and some venture firms is the trace of high-energy AI emissions in the innovators’ aqueous humor.
Stephen E Arnold, December 1, 2023
Google and X: Shall We Again Love These Bad Dogs?
November 30, 2023
This essay is the work of a dumb dinobaby. No smart software required.
Two stories popped out of my blah newsfeed this morning (Thursday, November 30, 2023). I want to highlight each and offer a handful of observations. Why? I am a dinobaby, and I remember the adults who influenced me telling me to behave, use common sense, and follow the rules of “good” behavior. Dull? Yes. A license to cut corners and do crazy stuff? No.
The first story, if it is indeed accurate, is startling. “Google Caught Placing Big-Brand Ads on Hardcore Porn Sites, Report Says” includes a number of statements about the Google which make me uncomfortable. For instance:
advertisers who feel there’s no way to truly know if Google is meeting their brand safety standards are demanding more transparency from Google. Ideally, moving forward, they’d like access to data confirming where exactly their search ads have been displayed.
Where are big brand ads allegedly appearing? How about “undesirable sites.” What comes to mind for me is adult content. There are some quite sporty ads on certain sites that would make a Methodist Sunday school teacher blush.
These two big dogs are having a heck of a time ruining the living room sofa. Neither dog knows that the family will not be happy. These are dogs, not the mental heirs of Immanuel Kant. Thanks, MSFT Copilot. The stuffing looks like soap bubbles, but you are “good enough,” the benchmark for excellence today.
But the shocking factoid is that Google does not provide a way for advertisers to know where their ads have been displayed. Also, there is a possibility that Google shared ad revenue with entities which may be hostile to the interests of the US. Let’s hope that the assertions reported in the article are inaccurate. But if big brand ads are displayed on sites with content which could conceivably erode brand value, what exactly is Google’s system doing? I will return to this question in the observations section of this essay.
The second article is equally shocking to me.
“Elon Musk Tells Advertisers: ‘Go F*** Yourself’” reports that the EV and rocket man with a big hole digging machine allegedly said about advertisers who purchase promotions on X.com (Twitter?):
“Don’t advertise,” … “If somebody is going to try to blackmail me with advertising, blackmail me with money, go f*** yourself. Go f*** yourself. Is that clear? I hope it is.” … If advertisers don’t return, Musk said, “what this advertising boycott is gonna do is it’s gonna kill the company.”
The cited story concludes with this statement:
The full interview was meandering and at times devolved into stream of consciousness responses; Musk spoke for triple the time most other interviewees did. But the questions around Musk’s own actions, and the resulting advertiser exodus — the things that could materially impact X — seemed to garner the most nonchalant answers. He doesn’t seem to care.
Two stories. Two large and successful companies. What can a person like myself conclude, recognizing that there is a possibility that both stories may have some gaps and flaws:
- There is a disdain for old-fashioned “values” related to acceptable business practices
- The thread of pornography and foul language runs through the reports. The notion of well-crafted statements and behaviors is not part of the Google and X game plan in my view
- The indifference of the senior managers at both companies, which seeps through the descriptions of how Google and X operate, strikes me as intentional.
Now why?
I think that both companies are pushing the edge of business behavior. Google obviously is distributing ad inventory anywhere it can to try and create a market for more ads. Instead of telling advertisers where their ads are displayed or giving an advertiser control over where ads should appear, Google just displays the ads. The staggering irrelevance of the ads I see when I view a YouTube video is evidence that Google knows zero about me despite my being logged in and using some Google services. I don’t need feminine undergarments, concealed weapons products, or bogus health products.
With X.com the dismissive attitude of the firm’s senior management reeks of disdain. Why would someone advertise on a system which promotes behaviors that are detrimental to one’s mental set up?
The two companies are different, but in a way they are similar in their approach to users, customers, and advertisers. Something has gone off the rails in my opinion at both companies. It is generally a good idea to avoid riding trains which are known to run on bad tracks, ignore safety signals, and demonstrate remarkably questionable behavior.
What if the write ups are incorrect? Wow, both companies are paragons. What if both write ups are dead accurate? Wow, wow, the big dogs are tearing up the living room sofa. More than a “bad dog” scolding will be needed to repair the living room furniture.
Stephen E Arnold, November 30, 2023