A New Fear: Riding the Gradient Descent to Unemployment
September 8, 2023
Is AI poised to replace living, breathing workers? A business professor from Harvard (the ethics hot spot) reassures us (sort of): “AI Won’t Replace Humans—But Humans with AI Will Replace Humans Without AI.” Harvard Business Review’s Adi Ignatius interviewed AI scholar Karim Lakhani, who insists AI is a transformational technology on par with the Web browser. Companies and workers in all fields, he asserts, must catch up, then keep up, or risk being left behind. The professor states:
“This transition is really inevitable. And for the folks that are behind, the good news is that the cost to make the transition keeps getting lower and lower. The playbook for this is now well-known. And finally, the real challenge is not a technological challenge. I would say that’s like a 30% challenge. The real challenge is 70%, which is an organizational challenge. My great colleague Tsedal Neeley talks about the digital mindset. Every executive, every worker needs to have a digital mindset, which means understanding how these technologies work, but also understanding the deployment of them and then the change processes you need to do in terms of your organization to make use of them.”
Later, he advises:
“The first step is to begin, start experimentation, create the sandboxes, run internal bootcamps, and don’t just run bootcamps for technology workers, run bootcamps for everybody. Give them access to tools, figure out what use cases they develop, and then use that as a basis to rank and stack them and put them into play.”
Many of those use cases will be predictable. Many more will be unforeseen. One thing we can anticipate: users will rapidly acclimate to technologies that make their lives easier. Already, Lakhani notes, customer expectations have been set by AI-empowered big tech. People expect their Uber to show up within minutes and whisk them away, or an Amazon transaction dispute to be resolved instantly. Younger customers have less and less patience for businesses that operate in slower, antiquated ways. Will companies small, medium, and large have to embrace AI or risk becoming obsolete?
Cynthia Murrell, September 8, 2023
We Are from a Big Outfit. We Are Here to Help You. No, Really.
September 7, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
“Greetings, creators,” says the sincere (if smarmy) voice of the Google thing. “We are here to help you.”
Better listen up. The punishment may range from becoming hard to find (what’s new?) to losing revenue.
The cheerful, young, and well-paid professional smiles at the creator and says, “Good morning, I am from a certain alleged monopoly. I am definitely here to help you.” Thanks, MidJourney. The gradient descent is allowing your coefficient of friction to be reduced.
I read “YouTube Advertising Formats.” I love the lack of a date on the write up. Metadata are often helpful. I like document version numbers too. As a dinobaby, I like the name of a person who allegedly wrote the article; for example, Mr. Nadella signs his blog posts about the future of the universe.
The write up makes one big point in my opinion: Creators lose control over ads shown before, during, and after their content is pushed to a user of YouTube and whatever other media the new, improved “smart” Google will offer its “users.”
Here’s how the Google makes sure a creator spots the important “fewer controls” message: little warning triangles with white exclamation points. I love those. Very cool.
Why is this change taking place at this time? Here are my thoughts:
- Users of YouTube are not signing up for ad-free YouTube. The change makes it possible for Google to hose more “relevant” ads into the creators’ content.
- Users of YouTube are clicking the “skip” button far too frequently. What’s the fix? You cannot skip so much, pal.
- Google is indeed concerned about ad revenue flow. Despite the happy talk about Google’s revenue, the push to smart software has sparked an appetite for computation. The simple rule is: More compute means more costs.
Is there a fix? Sure, but those adjustments require cash to fund an administrative infrastructure and time to figure out how to leverage options like TikTok and the Zuckbook. Who has time and money? Perhaps a small percentage of creators?
Net net: In an unregulated environment and with powerless “creators,” the Google is here to help itself; some others, maybe not so much.
Stephen E Arnold, September 7, 2023
Vaporware: It Costs Little and May Pay Off Big
September 6, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Since ChatGPT and assorted AI image-creation tools burst onto the scene, it seems generative AI is all anyone in the tech world can talk about. Some AI companies have been valued in the billions by those who expect trillion-dollar markets. But, asks Gary Marcus of Marcus on AI, “What if Generative AI Turned Out To Be a Dud?”
Might it be the industry has leapt before looking? Marcus points out generative AI revenues are estimated in just the hundreds of millions so far. He describes reasons the field may never satisfy expectations, like pervasive bias, that pesky hallucination problem, and the mediocrity of algorithmic prose. He also notes people seem to be confusing generative AI with theoretical Artificial General Intelligence (AGI), which is actually much further from being realized. See the write-up for those details.
As disastrous as unrealized AI dreams may be for investors, Marcus is more concerned about policy decisions being made on pure speculation. He writes:
“On the global front, the Biden administration has both limited access to high-end hardware chips that are (currently) essential for generative AI, and limited investment in China; China’s not exactly being warm towards global cooperation either. Tensions are extremely high, and a lot of it revolves around dreams about who might ‘win the AI war.’ But what if the winner was nobody, at least not any time soon?”
On the national level, Marcus observes, important efforts to protect consumers from bias, misinformation, and privacy violations are being hampered by a perceived need to develop the technology as soon as possible. The post continues:
“We might not get the consumer protections we need, because we are trying to foster something that may not grow as expected. I am not saying anyone’s particular policies are wrong, but if the premise that generative AI is going to be bigger than fire and electricity turns out to be mistaken, or at least doesn’t bear out in the next decade, it’s certainly possible that we could wind up with what in hindsight is a lot of needless extra tension with China, possibly even a war in Taiwan, over a mirage, along with a social-media level fiasco in which consumers are exploited in news, and misinformation rules the day because governments were afraid to clamp down hard enough.”
Terrific.
Cynthia Murrell, September 6, 2023
Meta Play Tactic or Pop Up a Level. Heh Heh Heh
September 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Years ago I worked at a blue chip consulting firm. One of the people I interacted with had a rhetorical ploy for use when cornered in a discussion. The wizard would say, “Let’s pop up a level.” He would then shift the issue under discussion (a specific problem, say) to a higher-level concept and bring his solution into the bigger context.
The clever manager pops up a level to observe the lower-level tasks from a broader view. Thanks, Mother MJ. Not what I specified, but the gradient descent is alive and well.
Let’s imagine that the topic is a plain donut versus a chocolate-covered donut with sprinkles. There are six people in the meeting. The discussion is contentious because that’s what blue chip consulting Type As do: contention, sometimes nice, sometimes not. The “pop up a level” guy says, “Let’s pop up a level. The plain donut has less sugar. We are concerned about everyone’s health, right? The plain donut does not have so much evil, diabetes-linked sugar. It makes sense to just think of health and, obviously, of the decreased risk of rising health insurance premiums.” Unless one is paying attention and not eating the chocolate chip cookies provided for the meeting attendees, the pop-up-a-level approach might work.
For a current example of pop-up-a-level thinking, navigate to “Designing Deep Networks to Process Other Deep Networks.” Nvidia is in hog heaven with the smart software boom. The company realizes that there are lots of people getting in the game. The number of smart software systems and methods, products and services, grifts and gambles, and heaven knows what else is increasing. Nvidia wants to remain the Big Dog even though some outfits want to design their own chips or be like Apple and maybe find a way to do the Arm thing. Enter the pop-up-a-level play.
The write up says:
The solution is to use convolutional neural networks. They are designed in a way that is largely “blind” to the shifting of an image and, as a result, can generalize to new shifts that were not observed during training…. Our main goal is to identify simple yet effective equivariant layers for the weight-space symmetries defined above. Unfortunately, characterizing spaces of general equivariant functions can be challenging. As with some previous studies (such as Deep Models of Interactions Across Sets), we aim to characterize the space of all linear equivariant layers.
Translation: Our system and method can make use of any other accessible smart software plumbing. Stick with Nvidia.
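For readers who want to see what an “equivariant layer” actually is, here is a minimal sketch in the spirit of the “Deep Models of Interactions Across Sets” work the Nvidia post cites. To be clear: this is not Nvidia’s layer (their method targets richer weight-space symmetries); it is just the simplest toy instance of the idea. Permute the inputs, and the outputs permute the same way:

```python
import numpy as np

def equivariant_linear(X, A, B, b):
    """Permutation-equivariant linear layer in the DeepSets style.

    X : (n, d) array, a set of n feature vectors.
    A : (d, k) weights applied to each element independently.
    B : (d, k) weights applied to the (permutation-invariant) set mean.
    b : (k,) bias.
    """
    return X @ A + np.mean(X, axis=0, keepdims=True) @ B + b

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))               # a "set" of 5 vectors
A, B = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
b = rng.normal(size=3)

perm = rng.permutation(5)
# Permuting inputs then applying the layer equals applying the layer
# then permuting outputs: the defining equivariance property.
assert np.allclose(equivariant_linear(X[perm], A, B, b),
                   equivariant_linear(X, A, B, b)[perm])
```

Characterizing which linear maps respect a symmetry, as the per-element weights and set-mean weights do here, is roughly the same game Nvidia describes playing at a grander, weight-space scale.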
I think the pop-up-a-level approach is a useful one. Are the competitors savvy enough to counter the argument?
Stephen E Arnold, September 4, 2023
Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations
September 4, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkmann notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, the Azure AI platform, and Teams Premium. We learn:
“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”
Another, non-AI-related change: storage for one’s Outlook.com attachments will soon count against OneDrive storage quotas. That could be an unpleasant surprise for many when the changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.
Cynthia Murrell, September 4, 2023
Google: Another Modest Proposal to Solve an Existential Crisis. No Big Deal, Right?
September 1, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I am fascinated with corporate “do goodism.” Many people find themselves in an existential crisis anchored in zeros and ones. Is the essay submitted as original work the product of an industrious 15-year-old? Or is it the 10-second output of a smart software system like ChatGPT or You.com? Is that brilliant illustration the labor of a dedicated 22-year-old laboring in a cockroach-infested garage in Corona, Queens? Or was the art used in this essay output in about 60 seconds by my trusted graphic companion, Mother MidJourney?
“I see the watermark. This is a fake!” exclaims the precocious lad. This clever middle school student has identified the super secret hidden clue that this priceless image is indeed a fabulous fake. How could a young person detect such a sophisticated and subtle watermark? The attitude seems to be, “Let’s overestimate our capabilities and underestimate those of young people, who are skilled navigators of the digital world.”
Enough about Queens. What’s this “modest proposal” angle? Jonathan Swift beat that horse until it died in the early 18th century. I think the reference makes a bit of sense. Mr. Swift proposed simple solutions to big problems. “DeepMind Develops Watermark to Identify AI Images” explains:
Google’s DeepMind is trialling [sic] a digital watermark that would allow computers to spot images made by artificial intelligence (AI), as a means to fight disinformation. The tool, named SynthID, will embed changes to individual pixels in images, creating a watermark that can be identified by computers but remains invisible to the human eye. Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation.”
Righto, it’s good enough. Plus, the affable crew at Alphabet Google YouTube are in an ideal position to monitor just about any tiny digital thing in the interwebs. Such a prized position as de facto ruler of the digital world makes it easy to flag and remove offending digital content with the itty bitty teenie weeny manipulated pixel thingy.
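For what it’s worth, DeepMind has not published how SynthID actually works, so here is only a naive, hypothetical least-significant-bit sketch of the general pixel-watermark idea. It shows both halves of the story: the mark is invisible to the eye, and a trivial edit wipes it out, which is why that “not foolproof” hedge matters:

```python
import numpy as np

def embed_mark(img, key=42):
    """Overwrite each pixel's least significant bit with a keyed
    pseudo-random bit. Pixel values change by at most 1, so the mark
    is invisible to the eye but easy for a computer to check."""
    bits = np.random.default_rng(key).integers(0, 2, img.shape, dtype=np.uint8)
    return (img & 0xFE) | bits

def detect_mark(img, key=42, threshold=0.95):
    """Declare the mark present if nearly all LSBs match the keyed pattern."""
    bits = np.random.default_rng(key).integers(0, 2, img.shape, dtype=np.uint8)
    return float(np.mean((img & 1) == bits)) >= threshold

img = np.random.default_rng(7).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed_mark(img)
print(detect_mark(marked))            # True: the mark survives intact
print(detect_mark(marked // 2 * 2))   # False: one trivial edit erases it
```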
Let’s assume that everyone, including the young fake spotter in the Mother MJ image accompanying this essay, accepts the Google as the de facto certifier of digital content. What are the downsides?
Gee, I give up. I cannot think of one downside to Google’s becoming the chokepoint for what’s in bounds and what’s out of bounds. Everyone will be happy. Happy is good in our stressed out world.
And think of the upsides! A bug might derail some creative work? A system crash might nuke important records about a guilty party because pixels don’t lie? Well, maybe just a little bit. The Google intern given the thankless task of optimizing image analysis might stick in an unwanted instruction. So what? The issue will be resolved in a court, and these legal proceedings are super efficient and super reliable.
I find it interesting that the article does not see any problem with the Googley approach. Like the Oxford research which depended upon Facebook data, the truth is the truth. No problem. Gatekeeping and certification authority are exciting business concepts.
Stephen E Arnold, September 1, 2023
YouTube Content: Are There Dark Rabbit Holes in Which Evil Lurks? Come On Now!
September 1, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
Google has become a cultural touchstone. The most recent evidence is a bit of moral outrage in Popular Science. Now the venerable magazine is PopSci.com, and the Google has irritated the technology-explaining staff. Navigate to “YouTube’s Extremist Rabbit Holes Are Deep But Narrow.”
“Google, your algorithm is creating rabbit holes. Yes, that is a technical term,” says the PopSci technology expert. Thanks for a C+ image, MidJourney.
The write up asserts:
… exposure to extremist and antagonistic content was largely focused on a much smaller subset of already predisposed users. Still, the team argues the platform “continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.” Not only that, but engagement with this content still results in advertising profits.
I think the link with popular science is the “algorithm.” But the write up seems to be more a see-Google-is-bad essay. Science? No. Popular? Maybe?
The essay concludes with this statement:
While continued work on YouTube’s recommendation system is vital and admirable, the study’s researchers echoed that, “even low levels of algorithmic amplification can have damaging consequences when extrapolated over YouTube’s vast user base and across time.” Approximately 247 million Americans regularly use the platform, according to recent reports. YouTube representatives did not respond to PopSci at the time of writing.
I find the use of the word “admirable” interesting. Also, I like the assertion that algorithms can do damage. I recall seeing a report that explained social media is good and another study pitching the idea that bad digital content does not have a big impact. Sure, I believe these studies, just not too much.
Google has a number of buns in the oven. The firm’s approach to YouTube appears to be “emulate Elon.” Content moderation will be something with a lower priority than keeping tabs on Googlers who don’t come to the office or do much Google work. My suggestion for Popular Science is to do a bit more science, and a little less quasi-MBA type writing.
Stephen E Arnold, September 1, 2023
Slackers, Rejoice: Google Has a Great Idea Just for You
August 31, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I want to keep this short because the idea of not doing work to do work offends me deeply. So do the big thinkers who want people to relax, take time, smell the roses, and avoid those Type A tendencies. I like being a Type A. In fact, if I were not a Type A, I would not “be,” to use some fancy Descartes logic.
Is anyone looking down the Information Superhighway to see what speeding AI vehicle is approaching? Of course not; everyone is on break or playing Foosball. Thanks, Mother MidJourney, you did not send me to the arbitration committee for my image request.
“Google Meet’s New AI Will Be Able to Go to Meetings for You” reports:
…you might never need to pay attention to another meeting again — or even show up at all.
Let’s think about this new Google service. If AI continues to advance at a reasonable pace, an AI which can attend a meeting for a person can, at some point, replace the person. Does that sound reasonable? What a GenZ thrill: money for no work. The advice to take time for kicking back and living a stress-free life is just fantastic.
In today’s business climate, I am not sure that delegating knowledge work to smart software is a good idea. I like to use the phrase “gradient descent.” My connotation of this jargon is a cushioned roller coaster to one or more of the Seven Deadly Sins. I much prefer intentional use of software. I still like most of the old-fashioned methods of learning and completing projects. I am happy to encounter a barrier like my search for the ultimate owners of the domain rrrrrrrrrrr.com or the methods for enabling online fraud practiced by some Internet service providers. (Sorry, I won’t name these fine outfits in this free blog post. If you are attending my keynote at the Massachusetts and New York Association of Crime Analysts’ conference in early October, say, “Hello.” In that setting, I will identify some of these outstanding companies and share some thoughts about how these folks trample laws and regulations. Sound like fun?)
Google’s objective is to become the source for smart software. In that position, the company will have access to the knobs and levers controlling information access, shaping, and distribution. The end goal, in my opinion, is a quarterly financial report and the diminution of competition from annoying digital tsetse flies.
Wouldn’t it be helpful if the “real news” outfits looked down the Information Superhighway? No, of course not. For a Type A, the new “Duet” service does not “do it” for me.
Stephen E Arnold, August 31, 2023
Microsoft and Good Enough Engineering: The MSI BSOD Triviality
August 30, 2023
My line up of computers does not have a motherboard from MSI. Call me “Lucky,” I guess. Some MSI product owners were not so lucky. “Microsoft Puts Little Blame on Its Windows Update after Unsupported Processor BSOD Bug” is a fun read for those keeping notes about Microsoft’s management methods. The short essay romps through a handful of Microsoft’s recent quality misadventures.
“Which of you broke mom’s new vase?” asks the sister. The boys look surprised. The vase has nothing to say about the problem. Thanks, MidJourney, no adjudication required for this image.
I noted this passage in the NeoWin.net article:
It has been a pretty eventful week for Microsoft and Intel in terms of major news and rumors. First up, we had the “Downfall” GDS vulnerability which affects almost all of Intel’s slightly older CPUs. This was followed by a leaked Intel document which suggests upcoming Wi-Fi 7 may only be limited to Windows 11, Windows 12, and newer.
The most helpful statement in the article, in my opinion, was this one:
Interestingly, the company says that its latest non-security preview updates, ie, Windows 11 (KB5029351) and Windows 10 (KB5029331), which seemingly triggered this Unsupported CPU BSOD error, is not really what’s to blame for the error. It says that this is an issue with a “specific subset of processors”…
As with the SolarWinds misstep and a handful of other bone-chilling issues, Microsoft is skilled at making sure that its engineering is not the entire problem. That may be one benefit of what I call good enough engineering. The wiggle room created by certain systems and methods means that those who follow the documentation can still make mistakes. That’s where the blame should be placed.
Makes sense to me. Some MSI motherboard users looking at the beloved BSOD may not agree.
Stephen E Arnold, August 30, 2023
This Dinobaby Likes Advanced Search, Boolean Operators, and Precision. Most Do Not
August 28, 2023
Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.
I am not sure of the chronological age of the author of “7 Reasons to Replace Advanced Search with Filters So Users Can Easily Find What They Need.” From my point of view, the author has a mental age of someone much younger than I. The article identifies a number of reasons why “advanced search” functions are lousy. As a dinobaby, I want to be crystal clear: A user should have an interface which allows that user to locate the information required to respond in a useful way to a query.
The expert online searcher says with glee, “I love it when free online search services make finding information easy. Best of all is Amazon. It suggests so many things I absolutely need.” Hey, MidJourney, thanks for the image, and for not making Mother MJ okay my word choice. “Whoever said, ‘Nothing worthwhile comes easy’ is pretty stupid,” shouts our sliding board slider.
Advanced search in my dinobaby mental space means Boolean operators like AND, OR, and NOT, among others. Advanced search requires other meaningful “tags” specifically designed to minimize the ambiguity of words; for example, terminal can mean transportation or terminal can mean computing device. English is notable because it has numerous words which make sense only when a context is provided. Thus, a Field Code can instruct the retrieval system to discard the computing device context and retrieve the transportation context.
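For the young at heart who have never seen this work, here is a tiny, hypothetical sketch. The three-document corpus and the field names are invented for illustration; the point is how Boolean operators plus a field code strip the ambiguity from “terminal”:

```python
# A made-up three-document corpus; "field" plays the role of a field code
# that pins down the context of an ambiguous word like "terminal."
docs = [
    {"id": 1, "field": "transport", "text": "the bus terminal was crowded"},
    {"id": 2, "field": "computing", "text": "open a terminal and type ls"},
    {"id": 3, "field": "transport", "text": "the airport concourse was quiet"},
]

def has_term(doc, term):
    """Crude keyword test: exact word match, case-insensitive."""
    return term.lower() in doc["text"].lower().split()

# Query: terminal AND crowded, NOT field:computing
hits = [d["id"] for d in docs
        if has_term(d, "terminal")
        and has_term(d, "crowded")
        and d["field"] != "computing"]
print(hits)  # [1] -- precision, courtesy of Boolean operators and a field code
```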
The write up makes clear that, for today’s users, training wheels are important. Are these “aids” (icons, images, bundles of results under a category) dark patterns or assistance for the user? I can only imagine the push back I would receive if I were in a meeting with today’s “user experience” designers. Sorry, kids. I am a dinobaby.
I really want to work through the seven reasons advanced search sucks. But I won’t. The number of people who know how to use keyword search is tiny. One number I heard when I was a consultant to a certain big search engine: fewer than three percent of Web search users. The good news for those who buy into the arguments in the cited article is that dinobabies will die.
Is it a lack of education? Is it laziness? Is it what most of today’s users understand?
I don’t know. I don’t care. A failure to understand how to obtain the specific information one requires is part of the long, slow slide down a descent gradient. Enjoy the non-advanced search.
Stephen E Arnold, August 28, 2023