Will the Cloud Energize Google or Just Generate Marketing Material?

September 12, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read an article in Forbes (once the capitalist tool and now, I think, a tool for capitalists) titled “How Google Cloud Is Leveraging Generative AI To Outsmart Competition.” The competition? Does this mean AI entities in China, quasi-monopolies like Facebook (aka Meta) and Microsoft, or tiny start-ups with piles of venture funding?

[Image: content marketing payoff]

A decider in the publishing sector learns how to make it rain money. Is the method similar to that of the era of Yellow Journalism? Nope. The approach is squarely in line with Madison Avenue’s traditional approach. Thanks, Mother MidJourney. No red alert. Try to scramble up the gradient descent today, please.

The article’s title signals content marketing to me. As I read through the essay, it struck me as product placement.

Let me cite a couple of examples:

First, consider this passage:

Compared to Cloud TPU v4, the new Google Cloud TPU v5e has up to 2x higher training performance per dollar and up to 2.5x higher inference performance per dollar for LLMs and generative AI models. … Google is introducing Multislice technology in preview to make it easier to scale up training jobs, allowing users to quickly scale AI models beyond the boundaries of physical TPU pods—up to tens of thousands of Cloud TPU v5e or TPU v4 chips.

The “information” seems to come from a technical source proud of the advanced developments at the beloved Google. I would suggest that the information payload of the passage is zero for a person working in a Fortune 1000 company engaged in retail or financial services. In my opinion, the information is not even useful for marketing. Forbes is writing for the people not in the Google AI parade.

What about this passage?

Having its own foundation models enables Google to iterate faster based on usage patterns and customer feedback. Since the announcement of PaLM2 at Google I/O in April 2023, the company has enhanced the foundation model to support 32,000 token context windows and 38 new languages. Similarly, Codey, the foundation model for code completion, offers up to a 25% quality improvement in major supported languages for code generation and code chat. The primary benefit of owning the foundation model is the ability to customize it for specific industries and use cases.

Let’s set aside the tokens thing and the assertion about “25 percent quality improvement” and get to the point: “The primary benefit of owning the foundation model is the ability to customize it for specific industries and use cases.” To me, it seems that Google wants control: the foundation, the tools for building, and the use cases. Since these are software, Google benefits because control furthers its alleged monopoly grip on information. Furthermore, Google as a super user can easily inject for-fee, weaponized, or shaped content into the workflows to achieve its objective: money. I suppose some of the people in the parade will get a payoff like a drink of Google-Ade. But the winner is Google.

My view of this “real” news write up is a recycling of comments I have offered in my essays since the days of Backrub:

  • Google’s technology is designed to allow control of information
  • The methods are those of other alleged monopolies: Control and distribution to generate money and toll booths
  • The executives are unable to break out of the high school science club bubble in which they think, explain, and operate.

I wonder if Malcolm Forbes would be happy with this “real” news about Google, the number three cloud provider making a play to mash up infrastructure, information processing, and monetization, presented as an objective news story.

My hunch is that he would want to ride his Harley up Broadway to get away from those who have confused product placement with hard reporting.

Stephen E Arnold, September 12, 2023

AI and the Legal Eagles

September 11, 2023

Lawyers and other legal professionals know that AI algorithms, NLP, machine learning, and robotic process automation can transform their practices. These tools promise to increase profits, process cases faster, and improve efficiency. The possibilities for AI in legal practice appear to be a win-win situation. ReadWrite discusses how different AI processes can assist law firms, and the hurdles to implementation, in: “Artificial Intelligence In Legal Practice: A Comprehensive Guide.”

AI will benefit law firms by streamlining research and analytics processes. Machine learning and NLP can consume large datasets faster and more efficiently than humans. Contract management and review processes will be greatly improved because AI offers more comprehensive analysis, detects discrepancies, and decreases repetitive tasks.

AI will also lighten legal firms’ workloads with document automation and case management. Automating legal documents, such as leases, deeds, wills, and loan agreements, will decrease errors and reduce review time. AI will lower costs for due diligence procedures and e-discovery through automation and data analytics. These gains will benefit clients who want speedy results and low legal bills.

Law firms will benefit the most from NLP applications, predictive analytics, machine learning algorithms, and robotic process automation. Virtual assistants and chatbots also have their place in law firms as customer service representatives.

Despite all the potential improvements from AI, legal professionals need to adhere to data privacy and security procedures. They must also develop technology management plans that include authentication protocols, backups, and identity management strategies. AI biases, such as those involving diversity and sexism, must be evaluated and avoided in legal practices. Transparency and ethical concerns must also be addressed to remain compliant with governmental regulations.

The biggest barriers, however, will be overcoming reluctant staff, costs, estimating ROI, and compliance with privacy and other regulations.

“With a shift from viewing AI as an expenditure to a strategic advantage across cutting-edge legal firm practices, embracing the power of artificial intelligence demonstrates significant potential for intense transformation within the industry itself.”

These challenges are not any different from past technology implementations, except AI could make lawyers more reliant on technology than their own knowledge. Cue the Jaws theme music.

Whitney Grace, September 11, 2023

Fortune, Trust, and Smart Software: A Delightful Confection

September 8, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Trust. I see this word bandied about like a digital shuttlecock whacked by frantic influencers, pundits, and poobahs. Fortune Magazine likes the idea of trust and uses it in this headline: “Silicon Valley’s Elites Can’t Be Trusted with the Future of AI. We Must Break Their Dominance–and Dangerous God Complex.” The headline is interesting. First, this is Fortune Magazine. Forbes, in its pre-sponsored-content days, was a “capitalist tool.” Fortune Magazine was the giant PR megaphone for making money. Now Forbes is content marketing, and Fortune Magazine is not exactly a fan of modern Silicon Valley high school science club management. The clue is the word “trust” in the context of the phrase “God complex.”

[Image: defiant employee]

A senior manager demonstrates a lack of support for a subordinate who does not warrant trust. Does the subordinate look happy? Thanks, MidJourney. No red warning banners for this original art. You are, however, still on the gradient descent I fear.

The write up includes a number of interesting statements. I want to highlight two of these and offer a couple of observations. No, I won’t trot out my favorite “Where have you been for the last 25 years? Collecting Google swag and Zuckbook coffee mugs?”

The first passage I noticed was:

Research shows the market dysfunction created by Google, Amazon, Facebook, and other large players that dominate e-commerce, advertising, and online information-sharing. Big Tech monopolists are already positioning themselves to dominate AI. The shortage of GPUs and massive lobbying dollars spent requesting expensive regulation that would lock out startups are just two examples of this troubling trend.

Yo, Fortune, what do monopolies do? Are these outfits into garden parties for homeless children and cleaning up the environment for the good of walruses? The Fortune Magazine of 2023 would probably complain about Cornelius Vanderbilt’s treatment of the business associate he beat and tossed into the street.

The second passage warranting a red checkmark was:

AI will fundamentally change society and billions of lives. Its development is too important to be left to the hubris of Silicon Valley’s elites. India is well positioned to break their dominance and level the AI playing field, accelerating innovation and benefiting all of humankind.

Oh, oh. The U.S. of A. is no longer the sure-fire winner for the sharp pencil people at Fortune Magazine.

Several observations:

  1. The Silicon Valley method has worn thin for Manhattan folk
  2. India is the new big dog
  3. Trust is in vogue.

Okay.

Stephen E Arnold, September 8, 2023

A New Fear: Riding the Gradient Descent to Unemployment

September 8, 2023

Is AI poised to replace living, breathing workers? A business professor from Harvard (the ethics hot spot) reassures us (sort of), “AI Won’t Replace Humans—But Humans with AI Will Replace Humans Without AI.” Harvard Business Review‘s Adi Ignatius interviewed AI scholar Karim Lakhani, who insists AI is a transformational technology on par with the Web browser. Companies and workers in all fields, he asserts, must catch up then keep up or risk being left behind. The professor states:

“This transition is really inevitable. And for the folks that are behind, the good news is that the cost to make the transition keeps getting lower and lower. The playbook for this is now well-known. And finally, the real challenge is not a technological challenge. I would say that’s like a 30% challenge. The real challenge is 70%, which is an organizational challenge. My great colleague Tsedal Neeley talks about the digital mindset. Every executive, every worker needs to have a digital mindset, which means understanding how these technologies work, but also understanding the deployment of them and then the change processes you need to do in terms of your organization to make use of them.”

Later, he advises:

“The first step is to begin, start experimentation, create the sandboxes, run internal bootcamps, and don’t just run bootcamps for technology workers, run bootcamps for everybody. Give them access to tools, figure out what use cases they develop, and then use that as a basis to rank and stack them and put them into play.”

Many of those use cases will be predictable. Many more will be unforeseen. One thing we can anticipate is this: users will rapidly acclimate to technologies that make their lives easier. Already, Lakhani notes, customer expectations have been set by AI-empowered big tech. People expect their Uber to show up within minutes and whisk them away or for an Amazon transaction dispute to be resolved instantly. Younger customers have less and less patience for businesses that operate in slower, antiquated ways. Will companies small, medium, and large have to embrace AI or risk becoming obsolete?

Cynthia Murrell, September 8, 2023

We Are from a Big Outfit. We Are Here to Help You. No, Really.

September 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“Greetings, creators,” says the sincere (if smarmy) voice of the Google things. “We are here to help you.”

Better listen up. The punishment may range from becoming hard to find (what’s new?) to loss of revenue.

[Image: student at door]

The cheerful, young, and well-paid professional smiles at the creator and says, “Good morning, I am from a certain alleged monopoly. I am definitely here to help you.” Thanks, MidJourney. The gradient descent is allowing your coefficient of friction to be reduced.

I read “YouTube Advertising Formats.” I love the lack of a date on the write up. Metadata are often helpful. I like document version numbers too. As a dinobaby, I like the name of a person who allegedly wrote the article; for example, Mr. Nadella signs his blog posts about the future of the universe.

The write up makes one big point in my opinion: Creators lose control over ads shown before, during, and after their content is pushed to a user of YouTube and whatever other media the new, improved “smart” Google will offer its “users.”

Here’s how the Google makes sure a creator spots the important “fewer controls” message:

[Screenshot of the notice]

I love those little triangles and the white exclamation points. Very cool.

Why is this change taking place at this time? Here are my thoughts:

  1. Users of YouTube are not signing up for ad-free YouTube. The change makes it possible for Google to hose more “relevant” ads into the creators’ content.
  2. Users of YouTube are clicking the “skip” button far too frequently. What’s the fix? You cannot skip so much, pal.
  3. Google is indeed concerned about ad revenue flow. Despite the happy talk about Google’s revenue, the push to smart software has sparked an appetite for computation. The simple rule is: More compute means more costs.

Is there a fix? Sure, but those adjustments require cash to fund an administrative infrastructure and time to figure out how to leverage options like TikTok and the Zuckbook. Who has time and money? Perhaps a small percentage of creators?

Net net: In an unregulated environment and with powerless “creators,” the Google is here to help itself and maybe some others not so much.

Stephen E Arnold, September 7, 2023

Vaporware: It Costs Little and May Pay Off Big

September 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Since ChatGPT and assorted AI image-creation tools burst onto the scene, it seems generative AI is all anyone in the tech world can talk about. Some AI companies have been valued in the billions by those who expect trillion-dollar markets. But, asks Gary Marcus of Marcus on AI, “What if Generative AI Turned Out To Be a Dud?”

Might it be the industry has leapt before looking? Marcus points out generative AI revenues are estimated in just the hundreds of millions so far. He describes reasons the field may never satisfy expectations, like pervasive bias, that pesky hallucination problem, and the mediocrity of algorithmic prose. He also notes people seem to be confusing generative AI with theoretical Artificial General Intelligence (AGI), which is actually much further from being realized. See the write-up for those details.

As disastrous as unrealized AI dreams may be for investors, Marcus is more concerned about policy decisions being made on pure speculation. He writes:

“On the global front, the Biden administration has both limited access to high-end hardware chips that are (currently) essential for generative AI, and limited investment in China; China’s not exactly being warm towards global cooperation either. Tensions are extremely high, and a lot of it to revolve around dreams about who might ‘win the AI war.’ But what if it the winner was nobody, at least not any time soon?”

On the national level, Marcus observes, important efforts to protect consumers from bias, misinformation, and privacy violations are being hampered by a perceived need to develop the technology as soon as possible. The post continues:

“We might not get the consumer protections we need, because we are trying to foster something that may not grow as expected. I am not saying anyone’s particular policies are wrong, but if the premise that generative AI is going to be bigger than fire and electricity turns out to be mistaken, or at least doesn’t bear out in the next decade, it’s certainly possible that we could wind up with what in hindsight is a lot of needless extra tension with China, possibly even a war in Taiwan, over a mirage, along with a social-media level fiasco in which consumers are exploited in news, and misinformation rules the day because governments were afraid to clamp down hard enough.”

Terrific.

Cynthia Murrell, September 6, 2023

Meta Play Tactic or Pop Up a Level. Heh Heh Heh

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Years ago I worked at a blue chip consulting firm. One of the people I interacted with had a rhetorical ploy when cornered in a discussion. The wizard would say, “Let’s pop up a level.” He would then shift the issue which was, for example, a specific problem, into a higher level concept and bring his solution into the bigger context.

[Image: pop up a level]

The clever manager pops up a level to observe the lower level tasks from a broader view. Thanks, Mother MJ. Not what I specified, but the gradient descent is alive and well.

Let’s imagine that the topic is a plain donut or a chocolate covered donut with sprinkles. There are six people in the meeting. The discussion is contentious because that’s what blue chip consulting Type As do: Contention, sometimes nice, sometimes not. The “pop up a level” guy says, “Let’s pop up a level. The plain donut has less sugar. We are concerned about everyone’s health, right? The plain donut does not have so much evil, diabetes linked sugar. It makes sense to just think of health and obviously the decreased risk for increasing the premiums for health insurance.” Unless one is paying attention and not eating the chocolate chip cookies provided for the meeting attendees, the pop-up-a-level approach might work.

For a current example of pop-up-a-level thinking, navigate to “Designing Deep Networks to Process Other Deep Networks.” Nvidia is in hog heaven with the smart software boom. The company realizes that there are lots of people getting in the game. The number of smart software systems and methods, products and services, grifts and gambles, and heaven knows what else is increasing. Nvidia wants to remain the Big Dog even though some outfits want to design their own chips or be like Apple and maybe find a way to do the Arm thing. Enter the pop-up-a-level play.

The write up says:

The solution is to use convolutional neural networks. They are designed in a way that is largely “blind” to the shifting of an image and, as a result, can generalize to new shifts that were not observed during training…. Our main goal is to identify simple yet effective equivariant layers for the weight-space symmetries defined above. Unfortunately, characterizing spaces of general equivariant functions can be challenging. As with some previous studies (such as Deep Models of Interactions Across Sets), we aim to characterize the space of all linear equivariant layers.

Translation: Our system and method can make use of any other accessible smart software plumbing. Stick with Nvidia.
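The shift “blindness” the quoted passage leans on is easy to demonstrate. Here is a minimal numpy sketch, my own toy illustration and not Nvidia’s code: a 1-D circular convolution commutes with circular shifts, which is the equivariance property the researchers generalize from images to weight-space symmetries.

```python
import numpy as np

def circular_conv(x, k):
    # 1-D circular convolution: out[i] = sum_j k[j] * x[(i - j) mod n]
    n = len(x)
    return np.array([sum(k[j] * x[(i - j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 0.0])   # toy "image"
k = np.array([0.5, 0.25])                  # toy filter

# Shifting the input, then convolving, equals convolving, then shifting:
shifted_then_conv = circular_conv(np.roll(x, 2), k)
conv_then_shifted = np.roll(circular_conv(x, k), 2)
assert np.allclose(shifted_then_conv, conv_then_shifted)  # equivariance holds
```

The point of the Nvidia write up is to build layers with the analogous property for the symmetries of neural network weights themselves, so that one network can process another.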

I think the pop-up-a-level approach is a useful one. Are the competitors savvy enough to counter the argument?

Stephen E Arnold, September 4, 2023

Planning Ahead: Microsoft User Agreement Updates To Include New AI Stipulations

September 4, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft is eager to capitalize on its AI projects, but first it must make sure users are legally prohibited from poking around behind the scenes. For good measure, it will also ensure users take the blame if they misuse its AI tools. “Microsoft Limits Use of AI Services in Upcoming Services Agreement Update,” reports Ghacks.net. Writer Martin Brinkman notes these services include but are not limited to Bing Chat, Windows Copilot, Microsoft Security Copilot, Azure AI platform, and Teams Premium. We learn:

“Microsoft lists five rules regarding AI Services in the section. The rules prohibit certain activity, explain the use of user content and define responsibilities. The first three rules limit or prohibit certain activity. Users of Microsoft AI Services may not attempt to reverse engineer the services to explore components or rulesets. Microsoft prohibits furthermore that users extract data from AI services and the use of data from Microsoft’s AI Services to train other AI services. … The remaining two rules handle the use of user content and responsibility for third-party claims. Microsoft notes in the fourth entry that it will process and store user input and the output of its AI service to monitor and/or prevent ‘abusive or harmful uses or outputs.’ Users of AI Services are also solely responsible regarding third-party claims, for instance regarding copyright claims.”

Another, non-AI related change is that storage for one’s Outlook.com attachments will soon affect OneDrive storage quotas. That could be an unpleasant surprise for many when changes take effect on September 30. Curious readers can see a summary of the changes here, on Microsoft’s website.

Cynthia Murrell, September 4, 2023

Google: Another Modest Proposal to Solve an Existential Crisis. No Big Deal, Right?

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I am fascinated with corporate “do goodism.” Many people find themselves in an existential crisis anchored in zeros and ones. Is the essay submitted as original work the product of an industrious 15 year old? Or, is the essay the 10 second output of a smart software system like ChatGPT or You.com? Is that brilliant illustration the labor of a dedicated 22 year old laboring in a cockroach-infested garage in Corona, Queens? Or, was the art used in this essay output in about 60 seconds by my trusted graphic companion Mother MidJourney?

[Image: hidden pixel]

“I see the watermark. This is a fake!” exclaims the precocious lad. This clever middle school student has identified the super secret hidden clue that this priceless image is indeed a fabulous fake. How could a young person detect such a sophisticated and subtle watermark? The answer is, “Let’s overestimate our capabilities and underestimate those of young people who are skilled navigators of the digital world.”

Enough about Queens. What’s this “modest proposal” angle? Jonathan Swift beat this horse until it died in the 18th century. I think the reference makes a bit of sense. Mr. Swift proposed simple solutions to big problems. “DeepMind Develops Watermark to Identify AI Images” explains:

Google’s DeepMind is trialling [sic] a digital watermark that would allow computers to spot images made by artificial intelligence (AI), as a means to fight disinformation. The tool, named SynthID, will embed changes to individual pixels in images, creating a watermark that can be identified by computers but remains invisible to the human eye. Nonetheless, DeepMind has warned that the tool is not “foolproof against extreme image manipulation.”

Righto, it’s good enough. Plus, the affable crew at Alphabet Google YouTube are in an ideal position to monitor just about any tiny digital thing in the interwebs. Such a prized position as de facto ruler of the digital world makes it easy to flag and remove offending digital content with the itty bitty teenie weeny manipulated pixel thingy.
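For readers who want a concrete picture of a pixel-level watermark, here is a toy least-significant-bit sketch. To be clear: this is my own illustration, not SynthID, whose actual technique is unpublished and presumably far more robust; a plain LSB mark like this one is destroyed by almost any image manipulation, which is exactly the “not foolproof” problem.

```python
import numpy as np

def embed_mark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide bits in the lowest bit of the first len(bits) pixels."""
    marked = pixels.copy()
    flat = marked.ravel()               # view into the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # overwrite the least significant bit
    return marked

def read_mark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Recover the hidden bits by reading each pixel's lowest bit."""
    return [int(v) & 1 for v in pixels.ravel()[:n_bits]]

img = np.random.default_rng(0).integers(0, 256, size=(4, 4), dtype=np.uint8)
mark = [1, 0, 1, 1, 0, 0, 1, 0]
stamped = embed_mark(img, mark)

assert read_mark(stamped, len(mark)) == mark                     # machine-readable
assert np.abs(stamped.astype(int) - img.astype(int)).max() <= 1  # eye-invisible
```

Each pixel changes by at most one intensity level, so the mark is invisible to a human but trivially recoverable by software, which is the general idea the DeepMind quote describes.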

Let’s assume that everyone, including the young fake spotter in the Mother MJ image accompanying this essay, gets to become the de facto certifier of digital content. What are the downsides?

Gee, I give up. I cannot think of one problem with Google’s becoming the chokepoint for what’s in bounds and what’s out of bounds. Everyone will be happy. Happy is good in our stressed out world.

And think of the upsides? A bug might derail some creative work? A system crash might nuke important records about a guilty party because pixels don’t lie? Well, maybe just a little bit. The Google intern given the thankless task of optimizing image analysis might stick in an unwanted instruction. So what? The issue will be resolved in a court, and these legal proceedings are super efficient and super reliable.

I find it interesting that the article does not see any problem with the Googley approach. Like the Oxford research which depended upon Facebook data, the truth is the truth. No problem. Gatekeepers and certification authority are exciting business concepts.

Stephen E Arnold, September 1, 2023

YouTube Content: Are There Dark Rabbit Holes in Which Evil Lurks? Come On Now!

September 1, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Google has become a cultural touchstone. The most recent evidence is a bit of moral outrage in Popular Science. Now the venerable magazine is PopSci.com, and the Google has irritated the technology explaining staff. Navigate to “YouTube’s Extremist Rabbit Holes Are Deep But Narrow.”

[Image: shout]

“Google, your algorithm is creating rabbit holes. Yes, that is a technical term,” says the PopSci technology expert. Thanks for a C+ image, MidJourney.

The write up asserts:

… exposure to extremist and antagonistic content was largely focused on a much smaller subset of already predisposed users. Still, the team argues the platform “continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.” Not only that, but engagement with this content still results in advertising profits.

I think the link with popular science is the “algorithm.” But the write up seems to be more a see-Google-is-bad essay. Science? No. Popular? Maybe?

The essay concludes with this statement:

While continued work on YouTube’s recommendation system is vital and admirable, the study’s researchers echoed that, “even low levels of algorithmic amplification can have damaging consequences when extrapolated over YouTube’s vast user base and across time.” Approximately 247 million Americans regularly use the platform, according to recent reports. YouTube representatives did not respond to PopSci at the time of writing.

I find the use of the word “admirable” interesting. Also, I like the assertion that algorithms can do damage. I recall seeing a report that explained social media is good and another study pitching the idea that bad digital content does not have a big impact. Sure, I believe these studies, just not too much.

Google has a number of buns in the oven. The firm’s approach to YouTube appears to be “emulate Elon.” Content moderation will be something with a lower priority than keeping tabs on Googlers who don’t come to the office or do much Google work. My suggestion for Popular Science is to do a bit more science, and a little less quasi-MBA type writing.

Stephen E Arnold, September 1, 2023
