Apple and a Recycled Carnival Act: Woo Woo New New!

May 13, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

A long time ago, for a project related to a new product which was cratering, one person on my team suggested I read a book by James B. Twitchell. Carnival Culture: The Trashing of Taste in America provided a broad context, but the information in the analysis of taste was not going to save the enterprise software I was supposed to analyze. In general, I suggest that investment outfits with an interest in online information give me a call before writing checks to the tale-spinning entrepreneurs.


A small creative spark getting smashed in an industrial press. I like the eyes. The future of humans in Apple’s understanding of the American datasphere. Wow, look at those eyes. I can hear the squeals of pain, can’t you?

Dr. Twitchell did a good job, in my opinion, of making clear that some cultural actions are larger than a single promotion. Popular movies and people like P.T. Barnum (the circus guy) explain facets of America. These two examples are not just entertaining; they make clear what revs the engines of the US of A.

I read “Hating Apple Goes Mainstream” and realized that Apple is doing the marketing for which it is famous. The rollout of the iPad had a high-resolution, big-money advertisement. If you are around young children, squishy plastic toys are often in small fingers. Squeeze the toy and the eyes bulge. In the image above, a child’s toy is smashed in what seems to me to be the business end of an industrial press manufactured by MSE Technology Ltd in Turkey.


Thanks, MSFT Copilot. Glad you had time to do this art. I know you are busy on security or is it AI or is AI security or security AI? I get so confused.

The Apple iPad has been a bit of an odd duck. It is a good substitute for crappy Kindle-type readers. We have a couple, but they don’t get much use. Everything is a pain for me because the super duper Apple technology does not detect my fingers. I bought the gizmos so people could review the PowerPoint slides for one of my lectures at a conference. I also experimented with the iPad as a teleprompter. After a couple of tests, getting content on the device, controlling it, and fiddling so the darned thing knew I was poking the screen to cause an action — I put the devices on the shelf.

Forget the specific product; let’s look at the cited write up’s comments about the Apple “carnival culture” advertisement. The write up states:

Apple has lost its presumption of good faith over the last five years with an ever-larger group of people, and now we’ve reached a tipping point. A year ago, I’m sure this awful ad would have gotten push back, but I’m also sure we’d heard more “it’s not that big of a deal” and “what Apple really meant to say was…” from the stalwart Apple apologists the company has been able to count on for decades. But it’s awfully quiet on the fan-boy front.

I think this means the attempt to sell sent weird messages about a company people once loved. What’s going on, in my opinion, is that Apple is explaining what technology is going to do to people who once used software to create; words, images, and data exhaust will be secondary to the cosmetics of technology.

In short, people and their tools will be replaced by a gizmo or gizmos that are similar to bright lights and circus posters. What do these artifacts tell us? My take on the Apple iPad M4 super duper creative juicer is, at this time:

  1. So what? I have an M2 Air, and it does what I hoped the two touch insensitive iPads would do.
  2. Why create a form factor that is likely to get crushed when I toss my laptop bag on a security screening belt? Apple’s products are, in my view, designed to be landfill residents.
  3. Apple knows in its subconscious corporate culture heat sink that smart software, smart services, and dumb users are the future. The wonky, expensive, high-resolution advertisement shouts, “We know you are going to be out of a job. You will be like the yellow squishy toy.”

The message Apple is sending is that innovation has moved from utility to entertainment to the carnival sideshow. Put on your clown noses, people. Buy Apple.

Stephen E Arnold, May 13, 2024

Will Google Behave Like Telegram?

May 10, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I posted a short item on LinkedIn about Telegram’s blocking of Ukraine’s information piped into Russia via Telegram. I pointed out that Pavel Durov, the founder of VK and Telegram, told Tucker Carlson that he was into “free speech.” A few weeks after the interview, Telegram blocked the data from Ukraine for Russia’s Telegram users. One reason given, as I recall, was that Apple was unhappy. Telegram rolled over and complied with a request that seems to benefit Russia more than Apple. But that’s just my opinion. One of my team verified the block with a Ukrainian who interacts with senior professionals in Ukraine. Not surprisingly, Ukraine’s use of Telegram is under advisement. I think that means, “Find another method of sending encrypted messages and use that.” Compromised communications can translate to “Rest in Peace” in real time.


A Hong Kong rock band plays a cover of the popular hit Glory to Hong Kong. The bats in the sky are similar to those consumed in Shanghai during a bat festival. Thanks, MSFT Copilot. What are you working on today? Security or AI?

I read “Hong Kong Puts Google in Hot Seat With Ban on Protest Song.” That news story states:

The Court of Appeal on Wednesday approved the government’s application for an injunction order to prevent anyone from playing Glory to Hong Kong with seditious intent. While the city has a new security law to punish that crime, the judgment shifted responsibility onto the platforms, adding a new danger that just hosting the track could expose companies to legal risks. In granting the injunction, judges said prosecuting individual offenders wasn’t enough to tackle the “acute criminal problems.”

What’s Google got to do with that toe tapper Glory to Hong Kong?

The write up says:

The injunction “places Google, media platforms and other social media companies in a difficult position: Essentially pitting values such as free speech in direct conflict with legal obligations,” said Ryan Neelam, program director at the Lowy Institute and former Australian diplomat to Hong Kong and Macau. “It will further the broader chilling effect if foreign tech majors do comply.”

The question is, “Roll over as Telegram allegedly has, or fight Hong Kong and, by extension, everyone’s favorite streaming video influencer, China?” What will Google do? Scrub Glory to Hong Kong, number one with a bullet on someone’s hit parade, I assume.

My guess is that Google will go to court, appeal, and then take appropriate action to preserve whatever revenue is at stake. I do know The Sundar & Prabhakar Comedy Show will not use Glory to Hong Kong as its theme for its 2024 review.

Stephen E Arnold, May 10, 2024

Microsoft and Its Customers: Out of Phase, Orthogonal, and Confused

May 9, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am writing this post using something called Open LiveWriter. I switched when Microsoft updated our Windows machines and killed printing, a mouse linked via a KVM, and the 2012 version of its blog word processing software. I use a number of software products, and I keep old programs in order to compare them to modern options available to a user. The operative point is that a Windows update rendered the 2012 version of LiveWriter lost in the wonderland of Windows’ Byzantine code.


A young leader of an important project does not want to hear too much from her followers. In fact, she wishes they would shut up and get with the program. Thanks, MSFT Copilot. How’s the Job One of security coming today?

There are reports, which I am not sure I believe, that Windows 11 is a modern version of Windows Vista. The idea is that users are switching back to Windows 10. Well, maybe. But the point is that users are not happy with Microsoft’s changes to Windows; for instance:

  1. Notifications (advertising) in the Windows 11 start menu
  2. Alleged telemetry which provides a stream of user action and activity data to Microsoft for analysis (maybe marketing purposes?)
  3. Gratuitous interface changes which range from moving control items from a control panel to a settings panel to fiddling with task manager
  4. Wonky updates like the printer issue, driver wonkiness, and smart help which usually returns nothing of much help.

I read “This Third-Party App Blocks Integrated Windows 11 Advertising.” You can read the original article to track down this customization tool. My hunch is that its functions will be intentionally blocked by some bonus-centric Softie, or a change to the basic Windows 11 control panel will cause the software to perform like LiveWriter 2012.
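
For the curious, here is a minimal sketch, using Python’s standard winreg module, of the kind of per-user registry toggle such customization tools are believed to flip to quiet the Start menu “suggestions.” The value names below are commonly documented tweaks, not anything taken from the cited article, and a future Windows update may rename or ignore them.

    # Minimal sketch: quiet some Windows 11 "suggestions" for the current user.
    # The ContentDeliveryManager value names below are widely documented tweaks,
    # not Microsoft-guaranteed settings; an update may rename or ignore them.
    import winreg

    CDM_PATH = r"Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager"

    # Toggles believed to control Start menu and Settings "suggestions."
    SUGGESTION_VALUES = [
        "SystemPaneSuggestionsEnabled",      # Start menu suggestions
        "SubscribedContent-338388Enabled",   # "Show suggestions occasionally in Start"
        "SubscribedContent-338389Enabled",   # tips, tricks, and suggestion notifications
    ]

    def quiet_suggestions() -> None:
        """Set each suggestion toggle to 0 (off) under HKEY_CURRENT_USER."""
        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, CDM_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            for name in SUGGESTION_VALUES:
                winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, 0)

    if __name__ == "__main__":
        quiet_suggestions()
        print("Suggestion toggles set to 0 for the current user.")

If a later update quietly resets those values, that will rather prove the commenters’ point.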

I want to focus on a comment to the cited article written by seeprime:

Microsoft has seriously degraded File Explorer over the years. They should stop prolonging the Gates culture of rewarding software development, of new and shiny things, at the expense of fixing what’s not working optimally.

Now that security, not AI and not Windows 11, is the top priority at Microsoft, will the company remediate the grouses users have about the product? My answer is, “No.” Here’s why:

  1. Fixing, as seeprime suggests, is less important than coming up with something that seems “new.” The approach is dangerous because the “new” thing may be developed by someone uninformed about the hidden dependencies within what is code as convoluted as Google’s search plumbing. “New” just breaks the old, or the change is something that seems “new” to an intern or an older Softie who just does not care. Good enough is the high bar to clear.
  2. Details are not Microsoft’s core competency. Indeed, unlike Google, Microsoft has many revenue streams, and the attention goes to cooking up new big-money services like a version of Copilot which is not exposed to the Internet for its government customers. The cloud, not Windows, is the future.
  3. Microsoft, whether it knows it or not, is on the path to virtualize desktop and mobile software. The idea means that Microsoft does not have to put up with developers who make changes Microsoft does not want to work. Putting Windows in the cloud might give Microsoft the total control it desires.
  4. Windows is a security challenge. The thinking may be: “Let’s put Windows in the cloud and lock down security, updates, domain look ups, etc.” I would suggest that creating one giant target might introduce some new challenges to the Softie vision.

Speculation aside, Microsoft may be at a point when users become increasingly unhappy. The mobile model, virtualization, and smart interfaces might create tasty options for users in the near future. Microsoft cannot make up its mind about AI. It has the OpenAI deal; it has the Mistral deal; it has its own internal development; and it has Inflection and probably others I don’t know about.

Microsoft cannot make up its mind. Now Microsoft is doing an about-face and saying, “Security is Job One.” But there’s the need to make the Azure Cloud grow. Okay, okay, which is it? The answer, I think, is, “We want to do it all. We want everything.”

This might be difficult. Users might just pile up and remain out of phase, orthogonal, and confused. Perhaps I could add angry? Just like LiveWriter: Tossed into the bit trash can.

Stephen E Arnold, May 9, 2024

Buffeting AI: A Dinobaby Is Nervous

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am not sure the “go fast” folks are going to be thrilled with a dinobaby rich guy’s view of smart software. I read “Warren Buffett’s Warning about AI.” The write up included several interesting observations. The only problem is that smart software is out of the bag. Outfits like Meta are pushing the open source AI ball forward. Other outfits are pushing, but Meta has big bucks. Big bucks matter in AI Land.


Yes, dinobaby. You are on the right wavelength. Do you think anyone will listen? I don’t. Thanks, MSFT Copilot. Keep up the good work on security.

Let’s look at a handful of statements from the write up and do some observing while some in the Commonwealth of Kentucky recover from the Derby.

First, the oracle of Omaha allegedly said:

“When you think about the potential for scamming people… Scamming has always been part of the American scene. If I was interested in investing in scamming— it’s gonna be the growth industry of all time.”

Mr. Buffett has nailed the scamming angle. I particularly liked the “always.” Imagine a country built upon scamming. That makes one feel warm and fuzzy about America. Imagine how those who are hostile to US interests interpret the comment. Ill will toward the US can now be based on the premise that “scamming has always been part of the American scene.” Trust us? Just ignore the oracle of Omaha? Unlikely.

Second, the wise, frugal icon allegedly communicated that:

the technology would affect “anything that’s labor sensitive” and that for workers it could “create an enormous amount of leisure time.”

What will those individuals do with that “leisure time”? Gobbling down social media? Working on volunteer projects like picking up trash from streets and highways?

The final item I will cite is his 2018 statement:

“Cyber is uncharted territory. It’s going to get worse, not better.”

Is that a bit negative?

Stephen E Arnold, May 7, 2024

Trust the Internet? Sure and the Check Is in the Mail

May 3, 2024

This essay is the work of a dumb humanoid. No smart software involved.

  

When the Internet became commonplace in schools, students were taught how to use it as a research tool like encyclopedias and databases. Learning to research is better known as information literacy, and it teaches critical evaluation skills. The biggest takeaway from information literacy is to never take anything at face value, especially on the Internet. When I read CIRA and Continuum Loop’s report, “A Trust Layer For The Internet Is Emerging: A 2023 Report,” I had my doubts.

CIRA is the Canadian Internet Registration Authority, a non-profit organization that supposedly builds a trusted Internet. CIRA acknowledges that as a whole the Internet lacks a shared framework and tool sets to make it trustworthy. The non-profit states that there are small, trusted pockets on the Internet, but they sacrifice technical interoperability for security and trust.

CIRA released a report about how people are losing faith in the Internet. According to the report’s executive summary, the share of Canadians who trust the Internet fell from 71% to 57%, while the worldwide figure went from 74% to 63%. The report also noted that companies with a high trust rate outperform their competition. Then there’s this paragraph:

“In this report, CIRA and Continuum Loop identify that pairing technical trust (e.g., encryption and signing) and human trust (e.g., governance) enables a trust layer to emerge, allowing the internet community to create trustworthy digital ecosystems and rebuild trust in the internet as a whole. Further, they explore how trust registries help build trust between humans and technology via the systems of records used to help support these digital ecosystems. We’ll also explore the concept of registry of registries (RoR) and how it creates the web of connections required to build an interoperable trust layer for the internet.”

Does anyone else hear the TLA for Whiskey Tango Foxtrot in their head? Trusted registries sound like a sales gimmick to verify web domains. There are trusted resources on the Internet, but even those need to be fact-checked. The companies that have secure networks are Microsoft, TikTok, Google, Apple, and other big tech, but the only thing that can be trusted about some outfits is their fat bank accounts.

Whitney Grace, May 3, 2024

AI: Strip Mining Life Itself

May 2, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I may be — like an AI system — hallucinating. I think I am seeing more philosophical essays and medieval ratio recently. A candidate expository writing is “To Understand the Risks Posed by AI, Follow the Money.” After reading the write up, I did not get a sense that the focus was on following the money. Nevertheless, I circled several statements which caught my attention.

Let’s look at these, and you may want to navigate to the original essay to get each statement’s context.

First, the authors focus on what they as academic thinkers call “an extractive business model.” When I saw the term, I thought of the strip mines in Illinois. Giant draglines stripped the earth to expose coal. Once the coal was extracted, the scarred earth was bulldozed into what looked like regular prairie. It was not. Weeds grew. But to get corn or soy beans, the farmer had to spend big bucks to get chemicals and some Fancy Dan equipment to coax the trashed landscape to utility. Nice.

The essay does not make the downside of extractive practices clear. I will. Take a look at a group of teens in a fast food restaurant or at a public event. The group is a consequence of the online environment in which the individual spends hours each day. I am not sure how well the chemicals and equipment used to rehabilitate the strip-mined prairie apply to humans, but I assume someone will do a study and report.


The second statement warranting a blue exclamation mark is:

Algorithms have become market gatekeepers and value allocators, and are now becoming producers and arbiters of knowledge.

From my perspective, the algorithms are expressions of human intent. The algorithms are not the gatekeepers and allocators. The algorithms express the intent, goals, and desires of the individuals who create them. The “users” knowingly or unknowingly give up certain thought methods and procedures to obtain what appears to be something that scratches a Maslow’s Hierarchy of Needs itch. I think in terms of the medieval Great Chain of Being. The people at the top own the companies. Their instrument of control is their service. The rest of the hierarchy reflects a skewed social order. A fish understands only the environment of the fish bowl. The rest of the “world” is tough to perceive and understand. In short, the fish is trapped. Online users (addicts?) are trapped.

The third statement I marked is:

The limits we place on algorithms and AI models will be instrumental to directing economic activity and human attention towards productive ends.

Okay, who exactly is going to place limits? The farmer who leased his land to the strip mining outfit made a decision. He traded the land for money. Who is to blame? The mining outfit? The farmer? The system which allowed the transaction?

The situation at this moment is that yip yap about open source AI and the other handwaving cannot alter the fact that a handful of large US companies and a number of motivated nation states are going to spend what’s necessary to obtain control.

Net net: Houston, we have a problem. Money buys power. AI is a next generation way to get it.

Stephen E Arnold, May 2, 2024

Using AI But For Avoiding Dumb Stuff One Hopes

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read an interesting essay called “How I Use AI To Help With TechDirt (And, No, It’s Not Writing Articles).” The main point of the write up is that artificial intelligence or smart software (my preferred phrase) can be useful for certain use cases. The article states:

I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles. It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.


Thanks, MSFT Copilot. Bad grammar and an incorrect use of the apostrophe. Also, I was much dumber looking in the 9th grade. But good enough, the motto of some big software outfits, right?

The idea is that an AI system can function as a partner, research assistant, editor, and interlocutor. That sounds like what Microsoft calls a “copilot.” The article continues:

I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
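
For readers who want to picture that scorecard loop, here is a minimal sketch using the OpenAI Python client. The scorecard questions, the model name, and the prompt wording are placeholders of my own, not anything from the cited article or from Lex.

    # Minimal sketch: ask a language model to score a draft against a fixed
    # scorecard. The criteria, model name, and prompt wording are illustrative.
    from openai import OpenAI

    SCORECARD = """Rate the draft 1-5 on each item and explain each score briefly:
    1. Is the main argument stated in the first two paragraphs?
    2. Does every factual claim have support or a source?
    3. Which sentences would a careful ninth grader stumble over?
    4. What is the single weakest section, and why?"""

    def critique(draft: str, model: str = "gpt-4o") -> str:
        """Return the model's scorecard review of the draft (a critique, not a rewrite)."""
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "You are a blunt editor. Critique the draft; do not rewrite it."},
                {"role": "user", "content": f"{SCORECARD}\n\nDRAFT:\n{draft}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(critique("Telegram is a platform that..."))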

The use case is that smart software functions like Miss Dalton, my English composition teacher at Woodruff High School in 1958. She was a firm believer in diagramming sentences, following the precepts of the Tressler & Christ textbook, and arcane rules such as capitalizing the first word following a colon (correctly used, of course).

I think her approach was intended to force students in 1958 to perform these word and text manipulations automatically. Then when we trooped to the library every month to do “research” on a topic she assigned, we could focus on the content, the logic, and the structural presentation of the information. If you attend one of my lectures, you can see that I am struggling to live up to her ideals.

However, when I plugged in my comments about Telegram as a platform tailored to obfuscated communications, to the delivery of malware and X-rated content, and to enforcing the myth that the entity known as Mr. Durov does not cooperate with certain entities to filter content, AI systems failed miserably. Not only were the systems lacking content; one — Microsoft Copilot, to be specific — simply collapsed, offering no functional content at all. Two other systems balked at the idea of discussing the delivery of CSAM within a Group’s Channel devoted to paying customers of what is either illegal or extremely unpleasant content.

Several observations are warranted:

  1. For certain types of content, the systems lack sufficient data to know what the heck I am talking about
  2. For illegal activities, the systems are either pretending to be really stupid or the developers have added STOP words to the filters to make darned sure no improper output would be presented
  3. The systems are not up-to-date; for example, Mr. Durov was interviewed by Tucker Carlson a week before Mr. Durov blocked Ukraine Telegram Groups’ content for Telegram users in Russia.

Is it, therefore, reasonable to depend on a smart software system to provide input on a “newish” topic? Is it possible the smart software systems are fiddled by the developers so that no useful information is delivered to the user (free or paying)?

Net net: I am delighted people are finding smart software useful. For my lectures to law enforcement officers and cyber investigators, smart software is, as of May 1, 2024, not ready for prime time. My concern is that some individuals may not discern the problems with the outputs. Writing about the law and its interpretation is an area about which I am not qualified to comment. But perhaps legal content is different from garden variety criminal operations. No, I won’t ask, “What’s criminal?” I would rather rely on what Miss Dalton taught in 1958. Why? I am a dinobaby and deeply skeptical of probabilistic-based systems which do not incorporate Kolmogorov-Arnold methods. Hey, that’s my relative’s approach.

Stephen E Arnold, May 1, 2024

Big Tech and Their Software: The Tent Pole Problem

May 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I remember a Boy Scout camping trip. I was a Wolf Scout at the time, and my “pack” had the task of setting up our tent for the night. The scout master was Mr. Johnson, and he left it to us. The weather did not cooperate; the tent pegs pulled out in the wind. The center tent pole broke. We stood in the rain. We knew the badge for camping was gone, just like a dry place to sleep. Failure. Whom could we blame? I suggested, “McKinsey & Co.” I had learned that third parties were usually fall guys. No one knew what I was talking about.


Okay, ChatGPT, good enough.

I thought about the tent pole failure, the miserable camping experience, and the need to blame McKinsey or at least an entity other than ourselves. The memory surfaced as I read “Laws of Software Evolution.” The write up sets forth some ideas which may not be firm guidelines like those articulated by the World Court, but they are about as enforceable.

Let’s look at the laws explicated in the essay.

The first law is that software is to support a real-world task. As a result (a corollary maybe?), the software has to evolve. That is the old chestnut “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.” The problem is change, which consumes money and time. As a result, original software is wrapped, peppered with calls to snappy new modules designed to fix up or extend the original software.

The second law is that when changes are made, the software construct becomes more complex. Complexity is what humans do. A true master makes certain processes simple. Software has artists, poets, and engineers with vision. Simple may not be a key component of the world the programmer wants to create. Thus, increasing complexity creates surprises like unknown dependencies, sluggish performance, and a giant black hole of costs.
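
To make the first two laws concrete, here is a tiny invented Python sketch (the function names are mine, not the essay’s) of how a routine supporting a real-world task tends to get wrapped rather than redesigned, and how each wrap quietly adds complexity:

    # Invented example: the original routine supported one real-world task.
    def calculate_invoice(items):
        """Version 1: sum the line items. Good enough at launch."""
        return sum(price * qty for price, qty in items)

    # Law one: the task evolves, so the code must evolve. Nobody redesigns;
    # a wrapper appears that calls the original and bolts on the new need.
    def calculate_invoice_with_tax(items, tax_rate=0.07):
        subtotal = calculate_invoice(items)          # keep the old code path alive
        return round(subtotal * (1 + tax_rate), 2)

    # Law two: each change makes the construct more complex. Another wrapper,
    # another hidden dependency, another surprise for the next maintainer.
    def calculate_invoice_with_tax_and_discount(items, tax_rate=0.07, loyalty_tier=None):
        total = calculate_invoice_with_tax(items, tax_rate)
        discounts = {"gold": 0.10, "silver": 0.05}   # policy buried in code
        return round(total * (1 - discounts.get(loyalty_tier, 0.0)), 2)

    if __name__ == "__main__":
        cart = [(19.99, 2), (5.00, 1)]
        print(round(calculate_invoice(cart), 2))                                   # 44.98
        print(calculate_invoice_with_tax(cart))                                    # 48.13
        print(calculate_invoice_with_tax_and_discount(cart, loyalty_tier="gold"))  # 43.32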

The third law is not explicitly called out like Laws One and Two. Here’s my interpretation of the “lurking law,” as I have termed it:

Code can be shaped and built upon.

My reaction to this essay is positive, but the link to evolution eludes me. The one issue I want to raise is that once software is built, deployed, and fiddled with, it is like a river pier built by Roman engineers. Moving the pier or fixing it so it will persist is a very, very difficult task. At some point, even the Roman concrete will weather away. The bridge or structure will fall down. Gravity wins. I am okay with software devolution.

The future, therefore, will be stuffed with software breakdowns. The essay makes a logical statement:

… we should embrace the malleability of code and avoid redesign processes at all costs!

Sorry. Won’t happen. Woulda, shoulda, and coulda cannot do the job.

Stephen E Arnold, May 1, 2024

One Half of the Sundar & Prabhakar Act Gets Egged: Garrf.

April 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

After I wrote Google Version 2: The Calculating Predator, BearStearns bought the rights to portions of my research and published one of its analyst reports. In that report, a point was made about Google’s research into semantic search. Remember, this was in 2005, long before the AI balloon inflated to the size of Taylor Swift’s piggy bank. My client (whom I am not allowed to name) and I were in the Manhattan BearStearns’ office. We received a call from Prabhakar Raghavan, who was the senior technology something at Yahoo at that time. I knew of Dr. Raghavan because he had been part of the Verity search outfit. On that call, Dr. Raghavan was annoyed that BearStearns suggested Yahoo was behind the eight ball in Web search. We listened, and I pointed out that Yahoo was not matching Google’s patent filing numbers. Although not a definitive indicator of innovation, it is one indicator. The Yahoo race car had sputtered and had lost the search race. I recall one statement Dr. Raghavan uttered, “I can do a better search engine for $300,000.” Well, I am still waiting. Dr. Raghavan may have an opportunity to find his future elsewhere if he continues to get the type of improvised biographical explosive device shoved under his office door at Google. I want to point out that I thought Dr. Raghavan’s estimate of the cost of search was a hoot. How could he beat that for a joke worthy of Jack Benny?


A big dumb bunny gets egged. Thanks, MSFT Copilot. Good enough.

I am referring to “The Man Who Killed Google Search,” written by Edward Zitron. For those to whom Mr. Zitron is not a household name like Febreze air freshener, he is “the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron). I host the Better Offline Podcast, coming to iHeartRadio and everywhere else you find your podcasts February 2024.” For more about Mr. Zitron, navigate to this link. (Yep, it takes quite a while to load, but be patient.)

The main point of the write up is that the McKinsey-experienced Sundar Pichai (the other half of the comedy act) hired the article-writing, Verity-seasoned Dr. Raghavan to help steer the finely-crafted corporate aircraft carrier, USS Google, into the Sea of Money. Even though the duo are not very good at comedy, they are doing a bang-up job of making the creaking online advertising machine output big money. If you don’t know how big, just check out the earnings for the most recent financial quarter at this link. If you don’t want to wade through Silicon Valley jargon, Google is “a two trillion dollar company.” How do you like that, Mr. and Mrs. Traditional Advertising?

The write up is filled with proper names of Googlers past and present. The point is that the comedy duo dumped some individuals who embraced the ethos of the old, engineering-oriented, relevant-search-results Google. The vacancies were filled with those who could shove more advertising into what once were clean, reasonably well-lighted places. At the same time, carpetland (my term for the executive corridor down which Messrs. Brin and Page once steered their Segways) elevated itself above the wonky world of the engineers, the programmers, the Ivory Tower thinker types, and the outright wonkiness of the advanced research units. (Yes, there were many at one time.)

Using the thought processes of McKinsey (the opioid idea folks) and the elocutionary skills of Dr. Raghavan, Google search degraded while the money continued to flow. The story presented by Mr. Zitron is interesting. I will leave it to you to internalize it and thank your lucky stars you are not given the biographical improvised explosive device as a seat cushion. Yowzah.

Several observations:

  1. I am not sure the Sundar & Prabhakar duo wrote the script for the Death of Google Search. Believe me, there were other folks in Google carpetland aiding the process. How about a baby maker in the legal department as an example of ground principles? What about an attempted suicide by a senior senior senior manager’s squeeze? What about a big time thinker’s untimely demise as a result of narcotics administered by a rental female?
  2. The problems at Google are a result of decades of high school science club members acting out their visions of themselves as masters of the universe and a desire to rig the game so money flowed. Cleverness, cute tricks, and owning the casino and the hotel and the parking lot were part of Google’s version of Hotel California. The business set up was money in, fancy dancing in public, and nerdland inside. Management? Hey, math is hard. Managing is zippo.
  3. The competitive arena was not set up for a disruptor like the Google. I do not want to catalog what the company did to capture what appears to be a very good market position in online advertising. After a quarter century, the idea that Google might be an alleged monopoly is getting some attention. But alleged is one thing; change is another.
  4. The innovator’s dilemma has arrived in the lair of Googzilla. After Google invented transformer technology, OpenAI made something snazzy with it and cut a deal with Microsoft. The result was the AI hype moment with Google viewed as a loser. Forget the money. Google is not able to respond, some said. Perception is important. The PR gaffe in Paris where Dr. Prabhakar showed off Bard outputting incorrect information; the protests and arrests of staff; and the laundry list of allegations about the company’s business practices in the EU are compounding the one really big problem — Google’s ability to control its costs. Imagine. A corporate grunt sport could be the hidden disease. Is Googzilla clear-headed or addled? Time will tell, I believe.

Net net: The man who killed Google is just a clueless accomplice, not the wizard with the death ray cooking the goose and its eggs. Ultimately, in my opinion, we have to blame the people who use Google products and services, rely on Google advertising, and trust search results. Okay, Dr. Raghavan, suspended sentence. Now you can go build your $300,000 Web search engine. I will be available to evaluate it as I did Search2, Neeva, and the other attempts to build a better Google. Can you do it? Sure, you will be a Xoogler. Xooglers can do anything. Just look at Mr. Brin’s airship. And that egg will wash off, unlike that crazy idea to charge Verity customers for each entry in an index passed for each user’s query. And that’s the joke that’s funnier than the Paris bollocksing of smart software. Taxi meter pricing for an in-house, enterprise search system. That is truly hilarious.

Stephen E Arnold, April 30, 2024

The Google Explains the Future of the Google Cloud: Very Googley, Of Course

April 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

At its recent Next 24 conference, Google Cloud and associates shared their visions for the immediate future of AI. Through the event’s obscurely named Session Library, one can watch hundreds of sessions and access resources connected to many more. The idea — if you have not caught on to the Googley nomenclature — is to make available videos of the talks at the conference. To narrow the list, one can filter by session category, conference track, learning level, solution, industry, topic of interest, and whether video is available. Keep in mind that the words you (a normal human, I presume) may use to communicate your interest may not be the lingo Googzilla speaks. AI and Machine Learning feature prominently. Other key areas include data and databases, security, development and architecture, productivity, and revenue growth (naturally). There is even a considerable nod to diversity, equity, and inclusion (DEI). Okay, nod, nod.

Here are a few session titles from just the “AI and ML” track to illustrate the scope of this event and the available information:

  • A cybersecurity expert’s guide to securing AI products with Google SAIF
  • AI for banking: Streamline core banking services and personalize customer experiences
  • AI for manufacturing: Enhance productivity and build innovative new business models
  • AI for telecommunications: Transform customer interactions and network operations
  • AI in capital markets: The biggest bets in the industry
  • Accelerate software delivery with Gemini and Code Transformations
  • Revolutionizing healthcare with AI
  • Streamlining access to youth mental health services

It looks like there is something for everybody. We think the titles make reasonably clear the scope and bigness of Google’s aspirations. Nor would we expect less from a $2 trillion outfit based on advertising, would we? Run a query for Code Red or, in Google lingo, CodeRED, and you will be surprised that the “state of emergency, Microsoft is a PR king” mentality persists. (Is this the McKinsey way?) Well, not for those employed at McKinsey. Former McKinsey professionals have more latitude in their management methods; for example, emulating high school science club planning techniques. There are no sessions we could spot about Google’s competition. If one is big enough, there is no competition. One of Googzilla’s relatives made a mess of Tokyo real estate largely without lasting consequences.

Cynthia Murrell, April 30, 2024
