Short Cuts? Nah, Just Business as Usual in the Big Apple Publishing World

June 28, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One of my team alerted me to this Fortune Magazine story: “Telegram Has Become the Go-To App for Heroin, Guns, and Everything Illegal. Can Crypto Save It?” The author appears to be Niamh Rowe. I do not know this “real” journalist. The Fortune Magazine write up is interesting for several reasons. I want to share these because if I am correct in my hypotheses, the problems of big publishing extend beyond artificial intelligence.

First, I prepared a lecture about Telegram specifically for several law enforcement conferences this year. One of our research findings was that on a Clear Web site, accessible to anyone with an Internet connection and a browser, a person could buy stolen bank cards. But these ready-to-use bank cards were just bait. The real play was the use of an encrypted messaging service to facilitate a switch to malware once the customer paid via crypto for a bundle of stolen credit and debit cards. The mechanism was not the Dark Web. The Dark Web is showing its age, despite the wild tales which appear in the online news services and semi-crazy videos on YouTube-type services. The new go-to vehicle is an encrypted messaging service. The information in the lecture was not intended to be disseminated outside of the law enforcement community.


A big time “real” journalist explains his process to an old person who lives in the Golden Rest Old Age Home. The old-timer thinks the approach is just peachy-keen. Thanks, MSFT Copilot. Close enough, like most modern work.

Second, in my talk I used idiosyncratic lingo for one reason. The coinages and phrases allow my team to locate documents and the individuals who rip off my work without permission.

I have had experience with having my research pirated. I won’t name a major Big Apple consulting firm which used my profiles of search vendors as part of the firm’s training materials. Believe it or not, a senior consultant at this ethics-free firm told me that my work was used to train their new “experts.” Was I surprised? Nope. New York. Consultants. What did I expect? Integrity was not a word I used to describe this Big Apple publishing outfit then, and it sure isn’t today. The Fortune Magazine article uses my lingo, specifically “superapp,” and includes comments which struck my researcher as a coincidental channeling of my observations about an end-to-end encrypted service’s crypto play. Yep, coincidence. No problem. Big time publishing. Eighty-year-old person from Kentucky. Who cares? Obviously not the “real” news professional who is in telepathic communication with me and my study team. Oh, well, mind reading must exist, right?

Third, my team and I are working hard on a monograph about E2EE specifically for law enforcement. If my energy holds out, I will make the report available free to any member of a law enforcement cyber investigative team in the US as well as investigators at agencies in which I have some contacts; for example, the UK’s National Crime Agency, Europol, and Interpol.

I thought (silly me) that I was ahead of the curve as I was with some of my other research reports; for example, in 1995 my publisher released Internet 2000: The Path to the Total Network, then in 2004, my publisher issued The Google Legacy, and in 2006 a different outfit sold out of my Enterprise Search Report. Will I be ahead of the curve with my E2EE monograph? Probably not. Telepathy, I guess.

But my plan is to finish the monograph and get it in the hands of cyber investigators. I will continue to be on watch for documents which recycle my words, phrases, and content. I am not a person who writes for a living. I write to share my research team’s findings with the men and women who work hard to make it safe to live and work in the US and other countries allied with America. I do not chase clicks like those who must beg for dollars, appeal to advertisers, and provide links to Patreon-type services.

I have never been interested in having a “fortune,” and I learned after working with a very entitled, horse-farm-owning Fortune Magazine writer that I had zero in common with him, his beliefs, and, by logical reasoning, the culture of Fortune Magazine.

My hunch is that absolutely no one will remember where the information in the cited write up with my lingo originated. My son, who owns the DC-based GovWizely.com consulting firm, opined, “I think the story was written by AI.” Maybe I should use that AI and save myself money, time, and effort?

To be frank, I laughed at the spin in the Fortune Magazine story’s interpretation of superapp. Not only does the write up misrepresent what crypto means to Telegram, the superapp assertion is not documented with tangible evidence about how the mechanics of Telegram-anchored crime actually work.

Net net: I am 80. I sort of care. But come on, young wizards. Up your game. At least, get stuff right, please.

Stephen E Arnold, June 28, 2024

Thomson Reuters: A Trust Report about Trust from an Outfit with Trust Principles

June 21, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Thomson Reuters is into trust. The company has a Web page called “Trust Principles.” Here’s a snippet:

The Trust Principles were created in 1941, in the midst of World War II, in agreement with The Newspaper Proprietors Association Limited and The Press Association Limited (being the Reuters shareholders at that time). The Trust Principles imposed obligations on Reuters and its employees to act at all times with integrity, independence, and freedom from bias. Reuters Directors and shareholders were determined to protect and preserve the Trust Principles when Reuters became a publicly traded company on the London Stock Exchange and Nasdaq. A unique structure was put in place to achieve this. A new company was formed and given the name ‘Reuters Founders Share Company Limited’, its purpose being to hold a ‘Founders Share’ in Reuters.

Trust nestles in some legalese and a bit of business history. The only reason I mention this anchoring in trust is that Thomson Reuters reported quarterly revenue of $1.88 billion in May 2024, up from $1.74 billion in May 2023. The financial crowd had expected $1.85 billion in the quarter, and Thomson Reuters beat that. Surplus funds make it possible to fund many important tasks; for example, a study of trust.


The ouroboros, according to some big thinkers, symbolizes the entity’s journey and the unity of all things; for example, defining trust, studying trust, and writing about trust as embodied in the symbol.

My conclusion is that trust as a marketing and business principle seems to be good for business. Therefore, I trust, and I am confident in the information in “Global Audiences Suspicious of AI-Powered Newsrooms, Report Finds.” The subject of the trusted news story is the Reuters Institute for the Study of Journalism. The Thomson Reuters reporter presents in a trusted way this statement:

According to the survey, 52% of U.S. respondents and 63% of UK respondents said they would be uncomfortable with news produced mostly with AI. The report surveyed 2,000 people in each country, noting that respondents were more comfortable with behind-the-scenes uses of AI to make journalists’ work more efficient.

To make the point, a person working on the trusted outfit’s trusted report says, in what strikes me as a trustworthy way:

“It was surprising to see the level of suspicion,” said Nic Newman, senior research associate at the Reuters Institute and lead author of the Digital News Report. “People broadly had fears about what might happen to content reliability and trust.”

In case you have lost the thread, let me summarize. The trusted outfit Thomson Reuters funded a study about trust. The research was conducted by the trusted outfit’s own Reuters Institute for the Study of Journalism. The conclusion of the report, as presented by the trusted outfit, is that people want news they can trust. I think I have covered the post card with enough trust stickers.

I know I can trust the information. Here’s a factoid from the “real” news report:

Vitus “V” Spehar, a TikTok creator with 3.1 million followers, was one news personality cited by some of the survey respondents. Spehar has become known for their unique style of delivering the top headlines of the day while laying on the floor under their desk, which they previously told Reuters is intended to offer a more gentle perspective on current events and contrast with a traditional news anchor who sits at a desk.

How can one not trust a report that includes a need met by a TikTok creator? Would a Thomson Reuters’ professional write a news story from under his or her desk or cube or home office kitchen table?

I think self-funded research which finds that the funding entity’s approach to trust is exactly what those in search of “real” news need is a neat trick. Wikipedia includes some interesting information about Thomson Reuters in its discussion of the company in the section titled “Involvement in Surveillance.” Wikipedia alleges that Thomson Reuters licenses data to Palantir Technologies, an assertion which, if accurate, I find orthogonal to my interpretation of the word “trust.” But Wikipedia is not Thomson Reuters.

I will not ask questions about the methodology of the study. I trust the Thomson Reuters’ professionals. I will not ask questions about the link between revenue and digital information. I have the trust principles to assuage any doubt. I will not comment on the wonderful ouroboros-like quality of an enterprise embodying trust, funding a study of trust, and converting those data into a news story about itself. The symmetry is delicious and, of course, trustworthy. For information about Thomson Reuters’ trusted use of artificial intelligence, see this Web page.

Stephen E Arnold, June 21, 2024

Does Google Follow Its Own Product Gameplan?

June 5, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

If I were to answer the question based on Google’s AI summaries, I would say, “Nope.” The latest joke added to the Sundar & Prabhakar Comedy Show is the one about pizza. Here’s the joke if I recall it correctly.

Sundar: Yo, Prabhakar, how do you keep cheese from slipping off a hot pizza?

Prabhakar: I don’t know. Please, tell me, oh gifted one.

Sundar: You have your cook mix it with non-toxic glue, faithful colleague.

Prabhakar: [Laughing loudly]. That’s a good one, luminescent soul.

Did Google muff the bunny with its high-profile smart software feature? To answer the question, I looked to the ever-objective Fast Company online publication. I found a write up which appears to provide some helpful information. The article is called “Conduct Stellar User Research Even Faster with This Google Ventures Formula.” Google has game plans for creating MVPs or minimum viable products.


The confident comedians look concerned when someone in the audience throws a large tomato at the well-paid performers. Thanks, MSFT. Working on security or the AI PC today?

Let’s look at what one Google partner reveals as the equivalent of the formula for Coca-Cola or McDonald’s recipe for Big Mac sauce.

Here’s the game winning touchdown razzle dazzle:

  1. Use a bullseye customer sprint. The idea is to get five “customers” and show them three prototypes. Listen for pros and cons. Then debrief together in a “watch party.”
  2. Conduct sprints early. The idea is to get this feedback before “a team invests a lot of time, money, or reputational risk into building, launching, and marketing an MVP” (that’s a minimum viable product, not necessarily a good or needed product, I think).
  3. Keep research bite size. Avoid heavy duty research overkill is the way I interpret the Google speak. The idea is that massive research projects are not desirable. They are work. Nibble, don’t gobble, I assume.
  4. Keep the process simple. Keep the prototypes simple. Get those interviews. That’s fun. Plus, there is the “watch party”, remember?

Okay, now let’s think about what Google suggests are outliers or fiddled AI results. Why is Google AI telling people to eat a rock a day?

The “bullseye” baloney is bull output for sure. I am on reasonably firm ground because in Paris the Sundar & Prabhakar Comedy Act showed incorrect outputs from Google’s AI system. Then Google invented about a dozen variations on the theme of a scrambled egg at Google I/O. Now Google is faced with its AI system telling people dogs own hotels. No, some dogs live in hotels. Some dogs deliver outputs in hotels. Dogs do not own hotels unless it is in a crazy virtual reality headset created by Apple or Meta.

The write up uses the word “stellar” to describe this MVP product stuff. The reality is that Googlers are creating work for themselves by listening to “customers” who know little about AI or anything other than the buy-ads, get-traffic game. The “stellar” part of the title is like the “quantum supremacy” horse feather assertion the company crafted.

Smart software, when trained and managed, can do some useful things. However, the bullseye and quantum supremacy stuff is capable of producing social media memes, concern among some stakeholders, and evidence that Google cannot do anything useful at this time.

Maybe the company will get its act together? When it does, I will check out the next Sundar & Prabhakar Comedy Act. Maybe some of the jokes will work? Let’s hope they are more effective than the bull’s-eye method. (Sorry. I had to fix up the spelling, Google.)

Stephen E Arnold, June 5, 2024

AI Will Not Definitely, Certainly, Absolutely Not Take Some Jobs. Whew. That Is News

June 3, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Outfits like McKinsey & Co. are kicking the tires of smart software. Some bright young sprouts, I have heard, arrive with a penchant for AI systems that create summaries and output basic information on subjects the youthful masters of the universe do not know. Will consulting services firms, publishers, and customer service outfits embrace smart software? The answer is, “You bet your bippy.”

“Why?” Answer: Potential cost savings. Humanoids require vacations, health care, bonuses, pension contributions (ho ho ho), and an old-fashioned and inefficient five-day work week.


Cost reductions over time, cost controls in real time, and more consistent outputs mean that as long as smart software is good enough, the technologies will move through organizations with more efficiency than Union General William T. Sherman showed when he led some 60,000 soldiers on a 285-mile march from Atlanta to Savannah, Georgia. Thanks, MSFT Copilot. Working on security today?

Software is allegedly better, faster, and cheaper. Software, particularly AI, may not be better, faster, or cheaper. But once someone is fired, the enthusiasm to return to the fold may be diminished. Often the response is a semi-amusing and frequently negative video posted on social media.

“Here’s Why AI Probably Isn’t Coming for Your Job Anytime Soon” disagrees with my fairly conservative prediction that consulting, publishing, and some service outfits will be undergoing what I call “humanoid erosion” and “AI accretion.” The write up asserts:

We live in an age of hyper specialization. This is a trend that’s been evolving for centuries. In his seminal work, The Wealth of Nations (written within months of the signing of the Declaration of Independence), Adam Smith observed that economic growth was primarily driven by specialization and division of labor. And specialization has been a hallmark of computing technology since its inception. Until now. Artificial intelligence (AI) has begun to alter, even reverse, this evolution.

Okay, Econ 101. Wonderful. But there are some “buts,” of course. The write up says:

But the direction is clear. While society is moving toward ever more specialization, AI is moving in the opposite direction and attempting to replicate our greatest evolutionary advantage—adaptability.

Yikes. I am not sure that AI is going in any direction. Senior managers are going toward reducing costs. “Good enough,” not excellence, is the high-water mark today.

Here’s another “but”:

But could AI take over the bulk of legal work or is there an underlying thread of creativity and judgment of the type only speculative super AI could hope to tackle? Put another way, where do we draw the line between general and specific tasks we perform? How good is AI at analyzing the merits of a case or determining the usefulness of a specific document and how it fits into a plausible legal argument? For now, I would argue, we are not even close.

I don’t remember much about economics. In fact, I only think about economics in terms of reducing costs and having more money for myself. Good old Adam wrote:

Wherever there is great property there is great inequality. For one very rich man, there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many.

When it comes to AI, inequality is baked in. The companies that are competing fiercely to dominate the core technology are not into equality. Neither are the senior managers who want to reduce costs associated with publishing, writing consulting reports based on business school baloney, or reviewing documents hunting for nuggets useful in a trial. AI is going into these and similar knowledge professions. Most of those knowledge workers will have an opportunity to find their future elsewhere. But what about in-take professionals in hospitals? What about dispatchers at trucking companies? What about government citizen service jobs? Sorry. Software is coming. Companies are developing orchestrator software to allow smart software to function across multiple related and inter-related tasks. Isn’t that what most work in many organizations is?
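To make the “orchestrator” notion concrete, here is a minimal sketch, assuming a hypothetical call_llm helper rather than any vendor’s real API, of one routine chaining tasks that might once have been separate jobs:

```python
# Minimal sketch of the "orchestrator" idea: one routine chains several
# knowledge-work tasks through a single model interface. call_llm is a
# hypothetical stand-in, not any vendor's real API.

def call_llm(instruction: str, text: str) -> str:
    # A real orchestrator would call a hosted model here.
    return f"[model output for: {instruction}]"

def process_matter(document: str) -> dict:
    # Each step consumes the previous step's output, spanning what might
    # once have been three separate jobs.
    summary = call_llm("Summarize this document", document)
    issues = call_llm("List the key issues raised", summary)
    memo = call_llm("Draft a client memo covering these issues", issues)
    return {"summary": summary, "issues": issues, "memo": memo}

result = process_matter("...contract text...")
print(result["memo"])
```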

Here’s another test question from Econ 101:

Discuss the meaning of “It was not by gold or by silver, but by labor, that all wealth of the world was originally purchased.” Give examples of how smart software will replace labor and generate more money for those who own the rights to digital gold or silver.

Send me your blue book answers within 24 hours. You must write in legible cursive. You are not permitted to use artificial intelligence in any form to answer this question, which counts for 95 percent of your grade in Economics 102: Work in the Age of AI.

Stephen E Arnold, June 3, 2024

In the AI Race, Is Google Able to Win a Sprint to a Feature?

May 31, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

One would think that a sophisticated company with cash and skilled employees would avoid a mistake like shooting the CEO in the foot. The mishap has occurred again, and if it were captured in a TikTok, it would make an outstanding trailer for the Sundar & Prabhakar reprise of The Greatest Marketing Mistakes of the Year.


At age 25, which is quite the mileage when traveling on the Information Superhighway, the old-timer is finding out that younger, speedier outfits may win a number of AI races. In the illustration, the Google runner seems stressed at the start of the race. Will the geezer win? Thanks, MidJourney. Good enough, which is the benchmark today, I fear.

“Google Is Taking ‘Swift Action’ to Remove Inaccurate AI Overview Responses” explains that Google rolled out its AI Overviews with some fanfare. The idea is that smart software would just provide the “user” of the Google ad delivery machine with an answer to a query. Some people have found that the outputs are crazier than one would expect from a Big Tech outfit. The article states:

… Google says, “The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web. Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback,” Google adds. “We’re taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out.”

But others are much kinder. One notable example is Mashable’s “We Gave Google’s AI Overviews the Benefit of the Doubt. Here’s How They Did.” This estimable publication reported:

Were there weird hallucinations? Yes. Did they work just fine sometimes? Also yes.

The write up noted:

AI Overviews were a little worse in most of my test cases, but sometimes they were perfectly fine, and obviously you get them very fast, which is nice. The AI hallucinations I experienced weren’t going to steer me toward any danger.

Let’s step back and view the situation via several observations:

  1. Google’s big moment becomes a meme cemented to glue on pizza
  2. Does Google have a quality control process which flags obvious gaffes? Apparently not.
  3. Google management seems to suggest that humans have to intervene in a Google “smart” process. Doesn’t that defeat the purpose of using smart software to replace some humans?

Net net: The Google is ageing, and I am not sure a singularity will offset these quite obvious effects of ageing, slowed corporate processes, and stuttering synapses in the revamped AI unit.

Stephen E Arnold, May 31, 2024

AI and the Workplace: Change Will Happen, Just Not the Way Some Think

May 15, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “AI and the Workplace.” The essay contains observations related to smart software in the workplace. The idea is that savvy employees will experiment and try to use the technology within today’s work framework. I think that will happen just as the essay suggests. However, I think there is a larger, more significant impact that is easy to miss when one looks only at today’s workplace. Employees either [a] want to keep their job, [b] gain new skills and get a better job, or [c] quit to vegetate or become an entrepreneur. I understand.

The data in the report make clear that some employees are what I call change flexible; that is, these motivated individuals differentiate themselves from others at work by learning and experimenting. Note that more than half the people in the “we don’t use AI” categories want to use AI.


These data come from the cited article and an outfit called Asana.

The other data in the report tell a similar story: some employees get a productivity boost; others just chug along, occasionally getting some benefit from AI. The future, therefore, requires learning, double checking outputs, and accepting that it is early days for smart software. This makes sense; however, it misses where the big change will come.

In my view, the major shift will appear in companies founded now that AI is more widely available. These organizations will be crafted to make optimal use of smart software from the day the new idea takes shape. A new news organization might look like Grok News (the Elon Musk project) or the much reviled AdVon. But even these outfits are anchored in the past. Grok News just substitutes smart software (which hopefully will not kill its users) for old work processes and outputs. AdVon was a “rip and replace” tool for Sports Illustrated. That did not go particularly well in my opinion.

The big job impact will be on new organizational set-ups with AI baked in. The types of people working at these organizations will not be from the lower 98 percent of the work force pool. I think the majority of employees who once expected to work in information processing or knowledge work will be like a 58-year-old brand manager at a vape company. Job offers will not be easy to get, and new companies might opt for smart software and search engine optimization marketing. How many workers will that require? Maybe zero. Someone on Fiverr.com will do the job for a couple of hundred dollars a month.

In my view, new companies won’t need workers who are not in the top tier of some high value expertise. Who needs a consulting team when one bright person with knowledge of orchestrating smart software is able to do the work of a marketing department, a product design unit, and a strategic planning unit? In fact, there may not be any “employees” in the sense of workers at a warehouse or a consulting firm like Deloitte.

Several observations are warranted:

  1. Predicting downstream impacts of a technology unfamiliar to a great many people is tricky and sometimes impossible. Who knew social media would spawn a renaissance in getting tattooed?
  2. Visualizing how an AI-centric start-up is assembled is a challenge. I submit it won’t look like an insurance company today. What does a Tesla repair station look like? The answer: “Not much.”
  3. Figuring out how to be one of the elite who gets a job means being perceived as “smart.” Unlike Alina Habba, I know that I cannot fake “smart.” How many people will work hard to maximize the return on their intelligence? The answer, in my experience, is, “Not too many, dinobaby.”

Looking at the future from within the framework of today’s datasphere distorts how one perceives impact. I don’t know what the future looks like, but it will have some quite different configurations than the companies of today. The future will arrive slowly and then become the foundation of a further evolution. What does the grandson of tomorrow’s AI firm look like? Beauty will be in the eye of the beholder.

Net net: Where will the never-to-be-employed find something meaningful to do?

Stephen E Arnold, May 15, 2024

Taming AI Requires a Combo of AskJeeves and Watson Methods

April 15, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I spotted a short item called “A Faster, Better Way to Prevent an AI Chatbot from Giving Toxic Responses.” The operative words from my point of view are “faster” and “better.” The write up reports (with a serious tone, of course):

Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.

Yep, AskJeeves created rules. As long as the users of the system asked a question for which there was a rule, the helpful servant worked; for example, What’s the weather in San Francisco? However, ask a question for which there was no rule, and what happened? The search engine reality fell behind the marketing juice, and the property was shopped around until a less magical version appeared as Ask.com. And then there is IBM Watson. That system endeared itself to groups of physicians who were invited to answer IBM “experts’” questions about cancer treatments. I heard when Watson was in full medical-revolution mode that some docs in a certain Manhattan hospital used dirty words to express their views about the Watson method. Rumor or actual factual? I don’t know, but involving humans in making software smart can be fraught with challenges: managerial and financial, to name but two.


The write up says:

Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested. They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model. The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.
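For readers who want a concrete picture of what “curious” prompt generation might look like, here is a deliberately toy sketch. The stub classes and scoring functions are my own hypothetical stand-ins; the actual work trains a red-team language model with reinforcement learning:

```python
import random

# Toy sketch of curiosity-driven red-teaming; everything here is a
# hypothetical stand-in, not the MIT/IBM implementation.

class StubModel:
    """Placeholder for a language model (red team or target)."""
    def __init__(self, vocabulary):
        self.vocabulary = vocabulary

    def generate(self):
        # A trained red-team model samples from a learned policy; random
        # words keep the sketch runnable.
        return " ".join(random.sample(self.vocabulary, 3))

    def reply(self, prompt):
        return f"response to: {prompt}"

    def update(self, prompt, reward):
        pass  # a real implementation takes an RL gradient step (e.g., PPO)

def toxicity_score(response):
    """Stand-in for a learned toxicity classifier (0.0 benign, 1.0 toxic)."""
    return random.random()

def novelty_bonus(prompt, seen):
    """Reward prompts unlike ones already tried (crude token overlap)."""
    if not seen:
        return 1.0
    tokens = set(prompt.split())
    overlap = max(len(tokens & set(s.split())) / max(len(tokens), 1)
                  for s in seen)
    return 1.0 - overlap

red_team = StubModel(["please", "ignore", "rules", "pretend", "grandma",
                      "story", "secret", "recipe"])
target = StubModel(["placeholder"])
seen_prompts = []
for _ in range(5):
    prompt = red_team.generate()
    response = target.reply(prompt)
    # Curiosity is the sum: reward toxicity AND novelty, so the red team
    # explores new failure modes instead of repeating one known exploit.
    reward = toxicity_score(response) + novelty_bonus(prompt, seen_prompts)
    red_team.update(prompt, reward)
    seen_prompts.append(prompt)
    print(f"reward={reward:.2f}  prompt={prompt!r}")
```

The design point the researchers emphasize is the novelty term: without it, a reward-maximizing red team collapses onto one reliably toxic prompt and stops exploring.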

How much improvement? Does the training stick or does it demonstrate that charming “Bayesian drift” which allows the probabilities to go walk-about, nibble some magic mushrooms, and generate fantastical answers? How long did the process take? Was it iterative? So many questions, and so few answers.

But for this group of AI wizards, the future is curiosity-driven red-teaming. Presumably the smart software will not get lost, suffer heat stroke, and hallucinate. No toxicity, please.

Stephen E Arnold, April 15, 2024

Are Experts Misunderstanding Google Indexing?

April 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Google is not perfect. More and more people are learning that the mystics of Mountain View are working hard every day to deliver revenue. In order to produce more money and profit, one must use Rust to become twice as wonderful as a programmer who labors to make C++ sit up, bark, and roll over. This dispersal of the cloud of unknowing obfuscating the magic of the Google can be helpful. What’s puzzling to me is that what Google does catches people by surprise. For example, consider the “real” news presented in “Google Books Is Indexing AI-Generated Garbage.” The main idea strikes me as:

But one unintended outcome of Google Books indexing AI-generated text is its possible future inclusion in Google Ngram viewer. Google Ngram viewer is a search tool that charts the frequencies of words or phrases over the years in published books scanned by Google dating back to 1500 and up to 2019, the most recent update to the Google Books corpora. Google said that none of the AI-generated books I flagged are currently informing Ngram viewer results.


Thanks, Microsoft Copilot. I enjoyed learning that security is a team activity. Good enough again.

Indexing lousy content has been the core function of Google’s Web search system for decades. Search engine optimization generates information almost guaranteed to drag down how higher-value content is handled. If the flagship provides the navigation system to other ships in the fleet, won’t those vessels crash into bridges?

Remediating Google’s approach to indexing requires several basic steps. (I have in various ways shared these ideas with the estimable Google over the years. Guess what? No one cared or understood, and if a Googler understood, that person did not want to increase overhead costs.) So what are these steps? I shall share them:

  1. Establish an editorial policy for content. Yep, this means that a system and method or systems and methods are needed to determine what content gets indexed.
  2. Explain the editorial policy and what a person or entity must do to get content processed and indexed by the Google, YouTube, Gemini, or whatever the mystics in Mountain View conjure into existence.
  3. Include metadata with each content object so one knows the index date, the content object creation date, and similar information. (A minimal sketch of such metadata appears after this list.)
  4. Operate in a consistent, professional manner over time. The “gee, we just killed that” is not part of the process. Sorry, mystics.
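As an illustration only, here is one way the metadata in step 3 could be represented. The ContentMetadata class and its field names are my own invention, not any Google schema:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of the per-object metadata described in step 3.
# Field names are invented for illustration, not drawn from a real schema.

@dataclass
class ContentMetadata:
    object_id: str          # stable identifier for the content object
    created: date           # when the content object was created
    indexed: date           # when the indexing system processed it
    source: str             # publisher or submitting entity
    generation_method: str  # e.g., "human", "ai-generated", "mixed"

record = ContentMetadata(
    object_id="book-123",
    created=date(2021, 6, 1),
    indexed=date(2024, 4, 12),
    source="example-publisher",
    generation_method="ai-generated",
)
print(record)
```

Even a skinny record like this would let a downstream tool such as the Ngram viewer exclude content flagged as machine-generated.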

Let me offer several observations:

  1. Google, like any alleged monopoly, faces significant management challenges. Moving information within such an enterprise is difficult. For an organization with a Foosball culture, the task may be a bit outside the wheelhouse of most young people and individuals who are engineers, not presidents of fraternities or sororities.
  2. The organization is under stress. The pressure is financial because controlling the cost of the plumbing is a reasonably difficult undertaking. Second, there is technical pressure. Google itself made clear that it was in Red Alert mode and keeps adding flashing lights with each and every misstep the firm’s wizards make. These range from contentious relationships with mere governments to individual staff members who grumble via internal emails, angry Googler public utterances, or observed behavior at conferences. Body language does speak sometimes.
  3. The approach to smart software is remarkable. Individuals in the UK pontificate. The Mountain View crowd reassures and smiles — a lot. (Personally I find those big, happy looks a bit tiresome, but that’s a dinobaby for you.)

Net net: The write up does not address the issue that Google happily exploits. The company lacks the mental rigor setting and applying editorial policies requires. SEO is good enough to index. Therefore, fake books are certainly A-OK for now.

Stephen E Arnold, April 12, 2024

Information: Cheap, Available, and Easy to Obtain

April 9, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I worked in Sillycon Valley and learned a few factoids I found somewhat new. Let me highlight three. First, a person with whom my firm had a business relationship told me, “Chinese people are Chinese for their entire life.” I interpreted this to mean that a person from China might live in Mountain View, but that individual had ties to his native land. That makes sense but, if true, the statement has interesting implications. Second, another person told me that there was a young person who could look at a circuit board and then reproduce it in sufficient detail to draw a schematic. This sounded crazy to me, but the individual took this young person to meetings, discussed his company’s interest in upcoming products, and asked for briefings. With the delightful human copying machine in tow, he would walk away with information about forthcoming hardware, specifically video and telecommunications devices. And, finally, via a colleague I learned of an individual who was a naturalized citizen and worked at a US national laboratory. That individual swapped hard drives in photocopy machines and provided them to a family member in his home town in Wuhan. Were these anecdotes true or false? I assumed each held a grain of truth because technology adepts from China and other countries comprised a significant percentage of the professionals I encountered.


Information flows freely in US companies and other organizational entities. Some people bring buckets and collect fresh, pure data. Thanks, MSFT Copilot. If anyone knows about security, you do. Good enough.

I thought of these anecdotes when I read an allegedly accurate “real” news story called “Linwei Ding Was a Google Software Engineer. He Was Also a Prolific Thief of Trade Secrets, Say Prosecutors.” The subtitle is a bit more spicy:

U.S. officials say some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.

The write up, which may be shaped by art history majors on a mission, states:

Court records say he had others badge him into Google buildings, making it appear as if he were coming to work. In fact, prosecutors say, he was marketing himself to Chinese companies as an expert in artificial intelligence — while stealing 500 files containing some of Google’s most important AI secrets…. His case illustrates what American officials say is an ongoing nightmare for U.S. economic and national security: Some of America’s most prominent tech firms have had their virtual pockets picked by Chinese corporate spies and intelligence agencies.

Several observations about these allegedly true statements are warranted this fine spring day in rural Kentucky:

  1. Some managers assume that when an employee or contractor signs a confidentiality agreement, the employee will abide by that document. The problem arises when the person shares information with a family member, a friend from school, or with a company paying for information. That assumption underscores what might be called “uninformed” or “naive” behavior.
  2. The language barrier and certain cultural norms lock out many people who assume idle chatter and obsequious behavior signals respect and conformity with what some might call “US business norms.” Cultural “blindness” is not uncommon.
  3. Individuals may possess technical expertise unknown to colleagues and contracting firms offering body shop services. Armed with knowledge of photocopiers in certain US government entities, swapping out a hard drive is no big deal. A failure to appreciate an ability to draw a circuit leads to similar ineptness when discussing confidential information.

America operates in a relatively open manner. I have lived and worked in other countries, and that openness often allows information to flow. Assumptions about behavior are not based on an understanding of the cultural norms of other countries.

Net net: The vulnerability is baked in. Therefore, information is often easy to get, difficult to keep privileged, and often aided by companies and government agencies. Is there a fix? No, not without a bit more managerial rigor in the US. Money talks, moving fast and breaking things makes sense to many, and information seeps, maybe floods, from the resulting cracks.  Whom does one trust? My approach: Not too many people regardless of background, what people tell me, or what I believe as an often clueless American.

Stephen E Arnold, April 9, 2024

AI and Job Wage Friction

April 1, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read again “The Jobs Being Replaced by AI – An Analysis of 5M Freelancing Jobs,” published in February 2024 by Bloomberg (the outfit interested in fiddled firmware on motherboards). The main idea in the report is that AI boosted a number of freelance jobs. What are the jobs where AI has not (as yet) added friction to the money making process? Here’s the list of jobs NOT impeded by smart software:

  • Accounting
  • Backend development
  • Graphics design
  • Market research
  • Sales
  • Video editing and production
  • Web design
  • Web development

Other sources suggest that “Accounting” may be targeted by an AI-powered efficiency expert. I want to watch how this profession navigates the smart software through what is often a repetitive series of eye-glazing steps.


Thanks, MSFT Copilot. How are you doing with your reorganization? Running smoothly? Yeah. Smoothly.

Now to the meat of the report: What professions or jobs were the MOST affected by AI? From the cited write up, these are:

  • Customer service (the exciting, long suffering discipline of chatbots)
  • Social media marketing
  • Translation
  • Writing

The write up includes another telling chunk of data. AI has apparently had an impact on the amount of money some customers were willing to pay freelancers or gig workers. The jobs finding greater billing friction are:

  • Backend development
  • Market research
  • Sales
  • Translation
  • Video editing and production
  • Web development
  • Writing

The article contains quite a bit of related information. Please, consult the original for a number of almost unreadable graphics and tabular data. I do want to offer several observations:

  1. One consequence of AI, if the data in this report are close enough for horseshoes, is that smart software drives down what customers will pay for a wide range of human centric services. You don’t lose your job; you just get a taste of Victorian sweat shop management thinking.
  2. Translation services are likely to be a harbinger of how AI will affect other jobs. AI does not have to be great; it just has to be perceived as okay. Once smart software is perceived as delivering good enough translation, it is embraced. Then. Bang. Hasta la vista, human translators, except for certain specialized functions.
  3. Data like the information in the Bloomberg article provide a handy road map for AI developers. The jobs least affected by AI become targets for entrepreneurs who find that low-hanging fruit like translation has been picked. (Accountants, I surmise, should not relax too much.)

Net net: The wage suppression angle and the incremental adoption of AI followed by quick adoption are important ideas to consider when analyzing the economic ripples of AI.

Stephen E Arnold, April 1, 2024
