Who Is Responsible for Security Problems? Guess, Please

March 28, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In my opinion, “Zero-Days Exploited in the Wild Jumped 50% in 2023, Fueled by Spyware Vendors” is a semi-sophisticated chunk of content marketing and an example of information shaping. The source of the “report” is Google. The article appears in what was a Google- and In-Q-Tel-backed company publication. The company is named “Recorded Future” and appears to be owned in whole or in part by a financial concern. In a separate transaction, Google purchased a cyber security outfit called Mandiant which provides services to government and commercial clients. This is an interesting collection of organizations, each with its own staff of technical professionals.

The young players are arguing about whose shoulders will carry the burden of the broken window. The batter points to the fielder. The fielder points to the batter. Watching are the coaches and teammates. Everyone, it seems, is responsible. So who will the automobile owner hold responsible? That’s a job for the lawyer retained by the entity with the deepest pockets and an unfettered communications channel. Nice work, MSFT Copilot. Is this scenario one with which you are familiar?

The article contains what seems to me quite shocking information; that is, companies providing specialized services to government agencies like law enforcement and intelligence entities are compromising the security of mobile phones. What’s interesting is that Google’s Android software is one of the more widely used “enablers” of what is now a ubiquitous computing device.

I noted this passage:

Commercial surveillance vendors (CSVs) were the leading culprit behind browser and mobile device exploitation, with Google attributing 75% of known zero-day exploits targeting Google products as well as Android ecosystem devices in 2023 (13 of 17 vulnerabilities). [Emphasis added. Editor.]

Why do I find the article intriguing?

  1. This “revelatory” write up can be interpreted to mean that spyware vendors have to be put in some type of quarantine, possibly similar to those odd boxes in airports where people who smoke can partake of a potentially harmful habit. In the special “room”, these folks can be monitored, perhaps?
  2. The number of exploits parallels the massive number of security breaches created by widely-used laptop, desktop, and server software systems. Bad actors have been attacking for many years, and now the sophistication and volume of cyber attacks seem to be increasing. Every few days cyber security vendors alert me to a new threat; for example, entering hotel rooms with Unsaflok. It seems that security problems are endemic.
  3. The “fix” or “remedial” steps involve users, government agencies, and industry groups. I interpret the argument as suggesting that companies developing operating systems need help and possibly cannot be responsible for these security problems.

The article can be read as a summary of recent developments in the specialized software sector and its careless handling of its technology. However, I think the article is suggesting that the companies building and enabling mobile computing are just victimized by bad actors, lousy regulations, and sloppy government behaviors.

Maybe? I believe I will tilt toward the content marketing purpose of the write up. The argument “Hey, it’s not us” is not convincing me. I think it will complement other articles that blur responsibility the way faces are blurred in some videos.

Stephen E Arnold, March 28, 2024

Backpressure: A Bit of a Problem in Enterprise Search in 2024

March 27, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have noticed numerous references to search and retrieval in the last few months. Most of these articles and podcasts focus on making an organization’s data accessible. That’s the same old story told since the days of STAIRS III and other dinobaby artifacts. The gist of the flow of search-related articles is that information is locked up or silo-ized. Using a combination of “artificial intelligence,” “open source” software, and powerful computing resources — problem solved.

A modern enterprise search content processing system struggles to keep pace with the changes to already processed content (the deltas) and the flow of new content in a wide range of file types and formats. Thanks, MSFT Copilot. You have learned from your experience with Fast Search & Transfer file indexing, it seems.

The 2019 essay “Backpressure Explained — The Resisted Flow of Data Through Software” is pertinent in 2024. The essay, written by Jay Phelps, states:

The purpose of software is to take input data and turn it into some desired output data. That output data might be JSON from an API, it might be HTML for a webpage, or the pixels displayed on your monitor. Backpressure is when the progress of turning that input to output is resisted in some way. In most cases that resistance is computational speed — trouble computing the output as fast as the input comes in — so that’s by far the easiest way to look at it.

Mr. Phelps identifies several types of backpressure. These are:

  1. More info to be processed than a system can handle
  2. Reading and writing file speeds are not up to the demand for reading and writing
  3. Communication “pipes” between and among servers are too small, slow, or unstable
  4. A group of hardware and software components cannot move data where it is needed fast enough.

I have simplified his more elegantly expressed points. Please, consult the original 2019 document for the information I have hip hopped over.
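Phelps’s first category (more input than the system can process) is easy to see in a toy simulation. The sketch below is my own illustration, not code from his essay: a bounded buffer sits between a fast producer and a slow consumer, and once the buffer fills, the excess must be dropped or the producer slowed. That drop is backpressure made visible.

```python
def simulate(ticks, produce_per_tick, consume_per_tick, capacity):
    """Simulate a bounded buffer between a producer and a consumer.

    Returns (processed, dropped). When the buffer is full, incoming
    items are shed: one common load-shedding response to backpressure.
    """
    buffer = 0
    processed = 0
    dropped = 0
    for _ in range(ticks):
        # Producer offers items; anything beyond remaining capacity
        # is dropped instead of queued.
        space = capacity - buffer
        accepted = min(produce_per_tick, space)
        dropped += produce_per_tick - accepted
        buffer += accepted
        # Consumer drains what it can this tick.
        done = min(consume_per_tick, buffer)
        buffer -= done
        processed += done
    return processed, dropped

# Input arrives at 5 items/tick, but the system processes only 3/tick.
print(simulate(ticks=100, produce_per_tick=5, consume_per_tick=3, capacity=10))  # (300, 193)
```

With a producer at five items per tick and a consumer at three, the buffer fills within a few ticks and roughly 40 percent of the input is shed. That loss, or the latency of an ever-growing queue if nothing is shed, is what backpressure forces a system designer to confront.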

My point is that in the chatter about enterprise search and retrieval, there are a number of situations (use cases to those non-dinobabies) which create some interesting issues. Let me highlight these and then wrap up this short essay.

In an enterprise, the following situations exist and are often ignored or dismissed as irrelevant. When people pooh pooh my observations, it is clear to me that these people have [a] never been subject to a legal discovery process associated with enterprise search fraud and [b] are entitled whiz kids who don’t do too much in the quite dirty, messy, “real” world. (I do like the variety in T shirts and lumberjack shirts, however.)

First, in an enterprise, content changes. These “deltas” are a giant problem. None of the systems I have examined, tested, installed, or advised about has a procedure to identify, in anything close to real time, a change made to a PowerPoint, presented to a client, and converted to an email confirming a deal, price, or technical feature. In fact, no one may know until the president’s laptop is examined by an investigator who discovers the “forgotten” information. Even more exciting is when the opposing legal team’s review of a laptop dump as part of a discovery process “finds” the sequence of messages and connects the dots. Exciting, right? But “deltas” pose another problem. These modified content objects proliferate like gerbils. One can talk about information governance, but it is just that — talk, meaningless jabber.

Second, the content which an employee needs to answer a business question in a timely manner can reside on an employee’s laptop or mobile phone, in a digital notebook, in a Vimeo video or one of those nifty “private” YouTube videos, behind the locked doors and specialized security systems loved by some pharma companies’ research units, in a Word document in something other than English, etc. Now the content is changed. The enterprise search fast talkers ignore identifying and indexing these documents with metadata that pinpoints the time of the change and who made it. Is this important? Some contract issues require this level of information access. Who asks for this stuff? How about a COTR for a billion-dollar government contract?
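The metadata gap described above (what changed, when, and by whom) is conceptually simple to capture, even if real enterprise systems rarely do it at scale. Here is a toy Python sketch, with function and field names of my own invention rather than any vendor’s API, of hash-based delta tracking: a content object is re-fingerprinted on each crawl, and a change record is appended only when the fingerprint differs.

```python
import hashlib
from datetime import datetime, timezone

def fingerprint(content: bytes) -> str:
    """Stable fingerprint of a content object; any edit changes it."""
    return hashlib.sha256(content).hexdigest()

def record_delta(index, doc_id, content, author):
    """Append a change record only when the content actually changed.

    `index` maps doc_id -> list of {hash, author, seen_at} records.
    Returns True if a new delta was recorded.
    """
    h = fingerprint(content)
    history = index.setdefault(doc_id, [])
    if history and history[-1]["hash"] == h:
        return False  # unchanged since the last crawl
    history.append({
        "hash": h,
        "author": author,
        "seen_at": datetime.now(timezone.utc).isoformat(),
    })
    return True

index = {}
record_delta(index, "deck.pptx", b"v1 slides", "sales")
changed = record_delta(index, "deck.pptx", b"v2 slides, new price", "president")
print(changed, len(index["deck.pptx"]))  # True 2
```

The hard part in a real deployment is not this bookkeeping; it is crawling laptops, phones, video platforms, and locked-down research systems often enough for the records to mean anything.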

Third, I have heard and read that modern enterprise search systems “use,” “apply,” or “operate within” industry standard authentication systems. Sure they do, within very narrowly defined situations. If the authorization system does not work, then quite problematic things happen. An employee fails to find the information needed and makes a really bad decision. Alternatively, the employee goes on an Easter egg hunt which may or may not work, but if the egg found is good enough, then that’s used. What happens? Bad things can happen. Have you ridden in an old Pinto? Access control is a tough problem, and it costs money to solve. Enterprise search solutions, even the whiz-bang cloud-centric distributed systems, implement something, which is often not the “right” thing.

Fourth, and I am going to stop here, there is the problem of end-to-end encrypted messaging systems. If you think employees do not use these, I suggest you do a bit of Easter egg hunting. What about the content in those systems? You can tell me, “Our company does not use these.” I say, “Fine. I am a dinobaby, and I don’t have time to talk with you because you are so much more informed than I am.”

Why did I romp through this rather unpleasant issue in enterprise search and retrieval? The answer is, “Enterprise search remains a problematic concept.” I believe there is some litigation underway about how the problem of search can morph into a fantasy of “a huge business because we have a solution.”

Sorry. Not yet. Marketing and closing deals are different from solving findability issues in an enterprise.

Stephen E Arnold, March 27, 2024

Research into Baloney Uses Four Letter Words

March 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I am critical of university studies. However, I spotted one which strikes at the heart of the Silicon Valley approach to life. “Research Shows That People Who BS Are More Likely to Fall for BS” has an interesting subtitle; to wit:

People who frequently mislead others are less able to distinguish fact from fiction, according to University of Waterloo researchers

A very good looking bull spends time reviewing information helpful to him in selling his artificial intelligence system. Unlike the two cows, he does not realize that he is living in a construct of BS. Thanks, MSFT Copilot. How are you doing with those printer woes today? Good enough, I assume.

Consider the headline in the context of promises about technologies which will “change everything.” Examples range from the marvels of artificial intelligence to the crazy assertions about quantum computing. My hunch is that the reason baloney has become one of the most popular mental foods in the datasphere is that people desperately want a silver bullet. Others know that if a silver bullet is described with appropriate language and a bit of sizzle, the thought can be a runway for money.

What’s this mean? We have created a culture in North America that makes “technology” and “glittering generalities” into hyperbole factories.  Why believe me? Let’s look at the “research.”

The write up reports:

People who frequently try to impress or persuade others with misleading exaggerations and distortions are themselves more likely to be fooled by impressive-sounding misinformation… The researchers found that people who frequently engage in “persuasive bullshitting” were actually quite poor at identifying it. Specifically, they had trouble distinguishing intentionally profound or scientifically accurate fact from impressive but meaningless fiction. Importantly, these frequent BSers are also much more likely to fall for fake news headlines.

Let’s think about this assertion. The technology story teller is an influential entity. In the world of AI, for example, some firms which have claimed “quantum supremacy” showcase executives who spin glorious word pictures of smart software reshaping the world. The upsides are magnetic; the downsides dismissed.

What about crypto champions? Telegram, founded by two Russian brothers, is spinning fabulous tales of revenue from advertising in an encrypted messaging system and cheerleading for a more innovative crypto currency. Operating from Dubai, the company has true believers. What’s not to like? Maybe these bros have the solution that has long been part of the Harvard winkle confections.

What shocked me about the write up was the use of the word “bullshit.” Here’s an example from the academic article:

“We found that the more frequently someone engages in persuasive bullshitting, the more likely they are to be duped by various types of misleading information regardless of their cognitive ability, engagement in reflective thinking, or metacognitive skills,” Littrell said. “Persuasive BSers seem to mistake superficial profoundness for actual profoundness. So, if something simply sounds profound, truthful, or accurate to them that means it really is. But evasive bullshitters were much better at making this distinction.”

What if the write up is itself BS? What if the journal publishing the article — British Journal of Social Psychology — is BS? On one level, I want to agree that those skilled in the art of baloney manufacturing, distributing, and outputting have a quite specific skill. On the other hand, I admit that I cannot determine at first glance whether the information provided is synthetic, ripped off, shaped, or weaponized. I would assert that most people are not able to identify what is “verifiable,” “an accurate accepted fact,” or “true.”

We live in a post-reality era. When the presidents of outfits like Harvard and Stanford face challenges to their research accuracy, what can I do when confronted with a media release about BS? Upon reflection, I think the generalization that people cannot figure out what’s on point or not is true. When drug store cashiers cannot make change, I take that as strong anecdotal evidence that other parts of their mental toolkits have broken or missing parts.

But the statement that those who output BS cannot themselves identify BS may be part of a broader educational failure. Lazy people, those who take short cuts, people who know how to do the PT Barnum thing, and sales professionals trying to close a deal reflect a societal issue. In a world of baloney, everything is baloney.

Stephen E Arnold, March 25, 2024

Peak AI? Do You Know What Happened to Catharists? Quiz ChatGPT or Whatever

March 21, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “Have We Reached Peak AI?” The question is an interesting one because some alleged monopolies are forming alliances with other alleged monopolies. Plus, wonderful managers from one alleged monopoly are joining another alleged monopoly to lead a new “unit” of that alleged monopoly. At the same time, the usually low-profile Thomson Reuters suggested that it had an $8 billion war chest for smart software. My team and I cannot keep up with the announcements about AI in fields ranging from pharma to ransomware from mysterious operators under the control of wizards in China and Russia.

Thanks, MSFT Copilot. You did a good job on the dinobabies.

Let’s look at a couple of statements in the essay which addresses the “peak AI” question.

I noticed that OpenAI is identified as an exemplar of a company that sticks to a script, avoids difficult questions, and gets a velvet glove from otherwise pointy fingernailed journalists. The point is well taken; however, attention does not require substance. The essay states:

OpenAI’s messaging and explanations of what its technology can (or will) do have barely changed in the last few years, returning repeatedly to “eventually” and “in the future” and speaking in the vaguest ways about how businesses make money off of — let alone profit from — integrating generative AI.

What if the goal of the interviews and the repeated assertions about OpenAI specifically and smart software in general is publicity and attention? Cut off the buzz for any technology and it loses its shine. Buzz is the oomph in the AI hot house. Who cares about Microsoft buying into games? Now who cares about Microsoft hooking up with OpenAI, Mistral, and Inception? That’s what the meme life delivers. Games, sigh. AI, let’s go and go big.

Another passage in the essay snagged me:

I believe a large part of the artificial intelligence boom is hot air, pumped through a combination of executive bullshitting and a compliant media that will gladly write stories imagining what AI can do rather than focus on what it’s actually doing.

One of my team members accused me of FOMO when I told Howard to get us a Flipper. (Can one steal a car with a Flipper? The answer is, “Not without quite a bit of work.”) The FOMO was spot on. I had a “fear of missing out.” Canada wants to ban the gizmos. Hence, my request, “Get me a Flipper.” Most of the “technology” in the last decade is zipping along on marketing, PR, and YouTube videos. (I just watched a private YouTube video about intelware which incorporates lots of AI. Is the product available? Nope. But… soon. Let the marketing and procurement wheels begin turning.)

Journalists (often real ones) fall prey to FOMO. Just as I wanted a Flipper, the “real” journalists want to write about what’s apparently super duper important. The Internet is flagging. Quantum computing is old hat and won’t run in a mobile phone. The new version of Steve Gibson’s Spinrite is not catching the attention of blue chip investment firms. Even the enterprise search revivifier Glean is not magnetic like AI.

The issue for me is more basic than the “Peak AI” thesis; to wit, What is AI? No one wants to define it because it is a bit like “security.” The truth is that AI is a way to make money in what is a fairly problematic economic setting. A handful of companies are drowning in cash. Others are not sure they can keep the lights on.

The final passage I want to highlight is:

Eventually, one of these companies will lose a copyright lawsuit, causing a brutal reckoning on model use across any industry that’s integrated AI. These models can’t really “forget,” possibly necessitating a costly industry-wide retraining and licensing deals that will centralize power in the larger AI companies that can afford them.

I would suggest that Google has already been ensnared by the French regulators. AI faces an on-going flow of legal hassles. These range from cash-starved publishers to the work-from-home illustrator who does drawings for a Six-Flags-Over-Jesus type of super church. Does anyone really want to get on the wrong side of a super church in (heaven forbid) Texas?

I think the essay raises a valid point: AI is a poster child of hype.

However, as a dinobaby, I know that technology is an important part of the post-industrial set up in the US of A. Too much money will be left on the table unless those savvy to revenue flows and stock upsides ignore the mish-mash of AI. In an unregulated setting, people need and want the next big thing. Okay, it is here. Say “hello” to AI.

Stephen E Arnold, March 21, 2024

Want Clicks: Do Sad, Really, Really Sorrowful

March 13, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The US is a hotbed of negative news. It’s what drives the media and perpetuates the culture of fear that (arguably) has plagued the country since colonial times. US citizens and now the rest of the world are so addicted to bad news that a research team got the brilliant idea to study what words people click. Nieman Lab wrote about the study in, “Negative Words In News Headlines Generate More Clicks-But Sad Words Are More Effective Than Angry Or Scary Ones.”

Thanks, MSFT Copilot. One of Redmond’s security professionals I surmise?

Negative words are prevalent in headlines because they sell clicks. The Nature Human Behavior(u)r journal published a study called “Negativity Drives Online News Consumption.” The study analyzed the effect of negative and emotional words on news consumption, and the research team discovered that negativity increased clickability. These findings also confirm the well-documented human tendency to seek out negativity in all information-seeking.

It coincides with humanity’s instinct to be vigilant of any danger and avoid it. While humans instinctively gravitate toward negative headlines, certain negative words are more popular than others. Humans apparently are driven to click on sad-related synonyms and to avoid anything resembling joy or fear, while angry words have no effect. It all goes back to survival:

“And if we are to believe “Bad is stronger than good” derives from evolutionary psychology — that it arose as a useful heuristic to detect threats in our environment — why would fear-related words reduce likelihood to click? (The authors hypothesize that fear and anger might be more important in generating sharing behavior — which is public-facing — than clicks, which are private.)

In any event, this study puts some hard numbers to what, in most newsrooms, has been more of an editorial hunch: Readers are more drawn to negativity than to positivity. But thankfully, the effect size is small — and I’d wager that it’d be even smaller for any outlet that decided to lean too far in one direction or the other.”

It could also be a strict diet of danger-filled media too.

Whitney Grace, March 13, 2024

In Tech We Mistrust

March 11, 2024

While tech firms were dumping billions into AI, they may have overlooked one key component: consumer faith. The Hill reports, “Trust in AI Companies Drops to 35 Percent in New Study.” We note that the 35% figure is for the US only, while the global drop was a mere 8%. Still, that is the wrong direction for anyone with a stake in the market. So what is happening? Writer Filip Timotija supplies the details.

So it is not just AI we mistrust; it is tech companies as a whole. That tracks. The study polled 32,000 people across 28 countries. Timotija reminds us regulators in the US and abroad are scrambling to catch up. Will fear of consumer rejection do what neither lagging lawmakers nor common decency can? The write-up notes:

“Westcott argued the findings should be a ‘wake up call’ for AI companies to ‘build back credibility through ethical innovation, genuine community engagement and partnerships that place people and their concerns at the heart of AI developments.’ As for the impacts on the future for the industry as a whole, ‘societal acceptance of the technology is now at a crossroads,’ he said, adding that trust in AI and the companies producing it should be seen ‘not just as a challenge, but an opportunity.’”

“Multiple factors contributed to the decline in trust toward the companies polled in the data, according to Justin Westcott, Edelman’s chair of global technology. ‘Key among these are fears related to privacy invasion, the potential for AI to devalue human contributions, and apprehensions about unregulated technological leaps outpacing ethical considerations,’ Westcott said, adding ‘the data points to a perceived lack of transparency and accountability in how AI companies operate and engage with societal impacts.’ Technology as a whole is losing its lead in trust among sectors, Edelman said, highlighting the key findings from the study. ‘Eight years ago, technology was the leading industry in trust in 90 percent of the countries we study,’ researchers wrote, referring to the 28 countries. ‘Now it is most trusted only in half.’”

Yes, an opportunity. All AI companies must do is emphasize ethics, transparency, and societal benefits over profits. Surely big tech firms will get right on that.

Cynthia Murrell, March 11, 2024

Google Gems: 21 February 2024

February 21, 2024

Saint Valentine’s Day week bulged with love and kisses from the Google. If I recall what I learned at Duquesne University, Father Valentine was a martyr and checked into heaven in the 3rd century CE. Figuring out the “real” news about Reverendissimo Padre is not easy, particularly with the advertising-supported Google search. Thus, it is logical that Google would have been demonstrating its love for its “users” with announcements, insights, and news as tokens of affection. I am touched. Let’s take a look at a selected run down of love bonbons.

THE BIG STORY

The Beyond Search team agreed that the big story is part marketing and part cleverness. The Microsofties said that old PCs would become door stops. Millions of Windows PCs with “old” CPUs and firmware will not work with future updates to Windows. What did Google do? The company announced that it would let users of those machines run Chrome OS and continue computing with Google services and features. You can get some details in a Reuters’ story.

Thanks, MSFT Copilot OpenAI.

AN AMAZING STORY IF ACCURATE

Wired Magazine reported that Google wants to allow its “users” to talk to “live agents.” Does this mean smart software purported to be alive, or actual humans (who, one hopes, speak reasonably good English or other languages like Kallawaya)?

MANAGEMENT MOVES

I find Google’s management methods fascinating. I like to describe the method as similar to that used by my wildly popular high school science club. Google did not disappoint.

The Seattle Times reports that Google has made those in its Seattle office chilly. You can read about those cutbacks at this link. Google is apparently still refining its termination procedures.

A Xoogler provided a glimpse of the informed, ethical, sensitive, and respectful tactics Google used when dealing with “real” news organizations. I am not sure if the word “arrogance” is appropriate. It is definitely quite a write up and provides an X-ray of Google’s management precepts in action. You can find the paywalled write up at this link. For whom are the violins playing?

Google’s management decision to publish a report about policeware appears to have forced one vendor of specialized software to close up shop. If you want information about the power of Google’s “analysis and PR machine” navigate to this story.

LITIGATION

New York City wants to sue social media companies for negligence. The Google is unlikely to escape the Big Apple’s focus on the now-noticeable impacts of skipping “real” life for the scroll world. There’s more about this effort in Axios at this link.

An Australian firm has noted that Google may be facing allegations of patent infringement. More about this matter will appear in Beyond Search.

The Google may be making changes to try to ameliorate EU legal action related to misinformation. A flurry of Xhitter posts reveals some information about this alleged effort.

Google seems to be putting a “litigation fence” in place. In an effort to be a great outfit, “Google Launches €25M AI Drive to Empower Europe’s Workforce.” The NextWeb story reports:

The initiative is targeted at “vulnerable and underserved” communities, who Google said risk getting left behind as the use of AI in the workplace skyrockets — a trend that is expected to continue. Google said it had opened applications for social enterprises and nonprofits that could help reach those most likely to benefit from training.  Selected organizations will receive “bespoke and facilitated” training on foundational AI.

Could this be a tactic intended to show good faith when companies terminate employees because smart software like Google’s put individuals out of a job?

INNOVATION

Android Police reports that Google is working on a folding phone. “The Pixel Fold 2’s Leaked Redesign Sees Google Trading Originality for a Safe Bet” explains how “safe” provides insight into the company’s approach to doing “new” things. (Aren’t other mobile phone vendors dropping this form factor?) Other product and service tweaks include:

  1. Music Casting gets a new AI. Read more here.
  2. Google thinks it can imbue self reasoning into its smart software. The ArXiv paper is here.
  3. Gemini will work with headphones in more countries. A somewhat confusing report is at this link.
  4. Forbes, the capitalist tool, is excited that Gmail will have “more” security. The capitalist tool’s perspective is at this link.
  5. Google has been inspired to emulate the Telegram’s edit recent sends. See 9 to 5 Google’s explanation here.
  6. Google has released Goose to help its engineers write code faster. Will these steps lead to terminating less productive programmers?

SMART SOFTWARE

Google is retiring Bard (which some pundits converted to the unpleasant word “barf”). Behold Gemini. The news coverage has been the digital equivalent of old-school carpet bombing. There are many Gemini items. Some have been pushed down in the priority stack because OpenAI rolled out its text to video features which were more exciting to the “real” journalists. If you want to learn about Gemini, its zillion token capability, and the associated wonderfulness of the system, navigate to “Here’s Everything You Need to Know about Gemini 1.5, Google’s Newly Updated AI Model That Hopes to Challenge OpenAI.” I am not sure the article covers “everything.” The fact that Google rolled out Gemini and then updated it in a couple of days struck me as an important factoid. But I am not as informed as Yahoo.

Another AI announcement was in my heart shaped box of candy. Google’s AI wizards made PIVOT public. No, pivot is not spinning; it is Prompting with Iterative Visual Optimization. You can see the service in action in “PIVOT: Iterative Visual Prompting Elicits Actionable Knowledge for VLMs.” My hunch is that PIVOT was going to knock OpenAI off its PR perch. It didn’t. Plus, there is an ArXiv paper authored by Nasiriany, Soroush and Xia, Fei and Yu, Wenhao and Xiao, Ted and Liang, Jacky and Dasgupta, Ishita and Xie, Annie and Driess, Danny and Wahid, Ayzaan and Xu, Zhuo and Vuong, Quan and Zhang, Tingnan and Lee, Tsang-Wei Edward and Lee, Kuang-Huei and Xu, Peng and Kirmani, Sean and Zhu, Yuke and Zeng, Andy and Hausman, Karol and Heess, Nicolas and Finn, Chelsea and Levine, Sergey and Ichter, Brian at this link. But then there is that OpenAI Sora, isn’t there?

Gizmodo’s content kitchen produced a treat which broke one of Googzilla’s teeth. The article “Google and OpenAI’s Chatbots Have Almost No Safeguards against Creating AI Disinformation for the 2024 Presidential Election” explains that Google, like other smart software outfits, is essentially letting “users” speed down an unlit, unmarked, unpatrolled Information Superhighway.

Business Insider suggests that the Google “Wingman” (like a Copilot. Get the word play?) may cause some people to lose their jobs. Did this just happen in Google’s Seattle office? The “real” news outfit opined that AI tools like Google’s wingman whip up concerns about potential job displacement. Well, software is often good enough and does not require vacations, health care, and effective management guidance. That’s the theory.

Stephen E Arnold, February 21, 2024

Googzilla Takes Another OpenAI Sucker Punch

February 19, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In January 2023, the savvy Googlers woke up to news that Microsoft and OpenAI had seized the initiative in smart software. One can argue the technical merits, but from a PR and marketing angle, the Softies and Sam AI-Man crept upon the World Economic Forum and clubbed the self-confident Googzilla in the cervical spine. The Google did not see that coming.

The somewhat quirky OpenAI has done it again. This time the blow was delivered with a kin geri or, more colloquially, a groin kick. How did Sam AI-Man execute this painful strike? Easy. The company released Sora, a text to video smart software function. “OpenAI’s Sora Generates Photorealistic Videos” reports:

Sora is a generative AI diffusion model. Sora can generate multiple characters, complex backgrounds and realistic-looking movements in videos up to a minute long. It can create multiple shots within one video, keeping the characters and visual style consistent, allowing Sora to be an effective storytelling tool.

Chatter indicates that OpenAI is not releasing a mere demonstration or a set of carefully crafted fakey examples. Nope, unlike a certain large outfit with a very big bundle of cash, the OpenAI experts have skipped the demonstrations and gone directly to a release of the service to individuals who will probe the system for safety and good manners.

Could Googzilla be the company which OpenAI intends to drop to its knees? From my vantage point, heck yes. The outputs from the system are not absolutely Hollywood grade, but the examples are interesting and suggest that the Google, when it gets up off the floor, will have to do more.

image

Several observations:

  1. OpenAI is doing a good job with its marketing and PR. Google announces quantum supremacy; OpenAI provides a glimpse of a text to video function which will make game developers, Madison Avenue art history majors, and TikTok pay attention
  2. Google is once again in react mode. I am not sure pumping up the number of tokens in Bard or Gemini or whatever is going to be enough to scrub away the Sora buzz and prevent the spread of this digital infection
  3. Googzilla may be like the poor 1950s movie monster who was tamed not by a single blow but by many pesky attacks. I think this approach is called “death by a thousand cuts.”

Net net: OpenAI has pulled off a marketing coup for a second time. Googzilla is ageing, and old often means slow. What is OpenAI’s next marketing play? A Bruce Lee “I am faster than you, big guy” or a Ninja stealth move? Both methods seem to have broken through the GOOG’s defenses.

Stephen E Arnold, February 19, 2024


Topicfinder and Its List of Free PR Sites

February 14, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I noted “40+ Free Sites to Post a Company’s Press Release (Updated).” The “news” is that the list has been updated. What makes this list interesting to penny-pinching marketers is that the sites are “free.” However, it is a good idea to read about each site’s options and terms of service.

image

Free can be a powerful magnet. Thanks Google Bard or Gemini or AI Test Kitchen whatever.

The listing is broken into four categories:

  1. The free press release submission list. The sites listed have registration and review processes for obvious reasons; namely, to screen out promotions of illegal products and services and other content which can spark litigation or retribution. A short annotation accompanies each item.
  2. A list of “niche” free press release sites. The idea is that some free services want a certain type of content; for example, a technical slant or tourist content.
  3. A list of sites which now charge for press release distribution.
  4. A list of dead press release distribution sites.

Is the list comprehensive? No. Plus, release aggregation sites like Newswise are not included.

Several observations:

  1. The lists do not include the sometimes “interesting” outfits operating on the margins of the marketing world. One example we researched was the outfit doing business as the icrowdnewswire.
  2. For-fee services are useful because a number of these firms have “relationships” with major search engines so that placement is allegedly “guaranteed.” Examples include PRUnderground, Benzinga, and others.
  3. The press release service may not offer a “forever archive”; that is, the press release content is disappeared to either save money or because old content is deemed to have zero click value to the distribution shop.

If you want to give “free” press releases a whirl, Topicfinder’s listing may be a useful starting point. OSINT experts may find some content gems pushed out from these services. Adding these to a watch list may be useful.

Keep in mind that once one registers, a bit of AI orchestration and some ChatGPT-type magic can create a news release blaster. Posting releases one-by-one is very yesterday.
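The "news release blaster" idea above can be sketched in a few lines. This is a minimal, hypothetical illustration only: `generate_release()` is a stub standing in for a real LLM API call, and the site names are placeholders, not actual press release endpoints.

```python
# Hypothetical sketch of a "news release blaster": one machine-generated
# release queued for several free distribution sites in a loop.

def generate_release(company: str, angle: str) -> str:
    # Stub for an LLM call; a real blaster would prompt a model
    # (e.g. a ChatGPT-type API) with company details here.
    return f"{company} announces {angle}. Details at example.com."

def blast(release: str, sites: list[str]) -> list[dict]:
    # Build one submission per target site; a real version would
    # POST each entry to the site's submission form or API.
    return [{"site": s, "body": release, "status": "queued"} for s in sites]

release = generate_release("Acme Corp", "a good-enough AI widget")
queue = blast(release, ["prsite-one.example", "prsite-two.example"])
print(len(queue))  # one queued submission per target site
```

The point of the sketch is the economics: once registration is done, the marginal cost of each additional "free" release approaches zero, which is why posting one-by-one is very yesterday.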

Stephen E Arnold, February 14, 2024

Sales SEO: A New Tool for Hype and Questionable Relevance

February 5, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Search engine optimization is a relevance eraser. Now SEO-style optimization has arrived for humans. “Microsoft Copilot Can Now Write the Sales Pitch of a Lifetime” makes clear that hiring is going to become more interesting for both human personnel directors (often called chief people officers) and AI-powered résumé screening systems. And for people who are responsible for procurement, figuring out when a marketing professional is tweaking the truth and hallucinating about a product or service will become a daily part of life… in theory.

image

Thanks for the carnival barker image, MSFT Copilot Bing thing. Good enough. I love the spelling of “asiractson”. With workers who may not be able to read, so what? Right?

The write up explains:

Microsoft Copilot for Sales uses specific data to bring insights and recommendations into its core apps, like Outlook, Microsoft Teams, and Word. With Copilot for Sales, users will be able to draft sales meeting briefs, summarize content, update CRM records directly from Outlook, view real-time sales insights during Teams calls, and generate content like sales pitches.

The article explains:

… Copilot for Service can pull in data from multiple sources, including public websites, SharePoint, and offline locations, in order to handle customer relations situations. It has similar features, including an email summary tool and content generation.

Why is MSFT expanding these interesting functions? Revenue. Paying extra unlocks these allegedly remarkable features. Prices range from $240 per year to a reasonable $600 per year per user. This is a small price to pay for an employee unable to craft solutions that sell, by golly.

Stephen E Arnold, February 5, 2024
