Marketing Craziness Okay or Not? Socks Not Software May Provide Some Answers

July 27, 2022

I recall reading about a mid-tier consulting firm which “discovered” via mostly real research that software may not work. The PowerPoints and the demos explain the big rock candy mountain world. Then the software arrives, and one gets some weird treat enjoyed east of Albania or north of Nunavut. Companies may sue software vendors, but those trials sort of whimper and die. I mean, it is software. Obviously it does not work.

But socks or sox as some prefer are different.

I read “Bass Pro Getting Sued for Not Honoring Guarantee for “Redhead Lifetime Guarantee All-Purpose Wool Socks.” Yeah, socks. The write up states:

If a company puts “Lifetime Guarantee” into the name of one of its products, you would expect the product to have a lifetime guarantee. But in the case of Bass Pro, Lifetime Guarantee is apparently shorthand for “If your lifetime guarantee socks fail we will replace them with an inferior sock with a 60 day guarantee.” A man who bought a bunch of “Redhead Lifetime Guarantee All-Purpose Wool Socks” is now suing Bass Pro for being deceptive.

What about the unlimited data offered by major US telecommunications companies? How did that work out? My recollection is that “unlimited” means “limited.” Plus, the telcos can change the rules and the rates with some flexibility. What about Internet Service Providers selling 200 megabits per second and delivering on a good day maybe 30 Mbps, if that?

The answer is pretty clear to me. Big companies define their marketing baloney to mean whatever benefits them.

Will the socks or sox matter resolve the issue?

Sure. The consumer is king in the land of giant companies. If you want your software to work, don’t use it. If you want hole free socks, don’t wear them.

Simple fix which regulatory agencies are just thrilled to view as logical and harmless. Those guarantees were crafted by a 23 year old music theory major who specializes in 16th century religious music. What does that person know about software or socks?

Stephen E Arnold, July 27, 2022

Intel Horse Feathers: The Graphics Edition

July 26, 2022

Intel, famous for its remarkable quantum facilitator chip, is back in the horse feathers news. I read the allegedly spot-on “Intel Won’t Be Troubling Nvidia This Year, Because the Arc A780 GPU Never Existed.” I don’t get too excited about graphics cards. The ones we use are stable and good enough (that’s the benchmark for excellence these days). The write up is more interested in this branch of video razzle dazzle, however. I noted this statement in the cited article about a wonder product from the Intel Inside folks:

Ryan Shrout, who handles Intel’s graphics marketing, has confirmed via Twitter that there isn’t an incoming A780 card – and not only that, but he also claims that Intel never even had plans to make one.

The former podcaster apparently knows when horse featherism must be addressed. How? Via Twitter!

[image: Ryan Shrout’s tweet]

What I find interesting is that assertions abound. Many of these sell products, licenses, and services which are marketing centric. My perception is that a desire to capture mind share takes precedence over reality.

I think part of the problem is sparked by insecurity or belief that publicity can make up for delivering something that solves a problem. Intel is going to build or was thinking about building big semiconductor fabs in a state which faces some water challenges. Next up was a build out in Ohio, just not too close to the big river. Plus we have the horse thing.

As TSMC and others move forward with 3 nm chips, Intel relies on a former podcaster and a tweet to set the record straight. Yeah, no A780. Credibility? Absolutely.

But a tweet? Very 2022.

Stephen E Arnold, July 26, 2022

Jargon Changes More Rapidly Than Search And Retrieval

July 22, 2022

Oh boy! There is a new term in the search and retrieval lexicon: neural search. While the term sounds like a search engine for telepaths or something a cyborg and/or android would use, Martech Series explained that it is something completely different: “Sinequa Adds Industry-Leading Neural Search Capabilities To Its Search Cloud Platform.”

Sinequa is an enterprise search leader, and it recently announced the addition of advanced neural search capabilities to its Search Cloud Platform. The upgrade promises to provide unprecedented relevance, accuracy, etc. Sinequa is the first company to commercially offer neural search built on four deep learning language models. The models are pre-trained with a combination of Sinequa’s trademark NLP and semantic search.

Search engines have used neural search models for years, but they were not cost-effective for enterprise systems:

“Neural search models have been used in internet searches by Google and Bing since 2019, but computing requirements rendered them too costly and slow for most enterprises, especially at production scale. Sinequa optimized the models and collaborated with the Microsoft Azure and NVIDIA AI/ML teams to deliver a high performance, cost-efficient infrastructure to support intensive Neural Search workloads without a huge carbon footprint. Neural Search is optimized for Microsoft Azure and the latest NVIDIA A10 or A100 Tensor Core GPUs to efficiently process large amounts of unstructured data as well as user queries.”
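Sinequa’s actual models are proprietary, so what follows is only a generic sketch of the idea behind neural (vector) search: encode documents and the query as vectors, then rank documents by cosine similarity instead of keyword matching. The `embed` function below is a hypothetical toy stand-in (a normalized bag-of-words vector) for a deep language model encoder; all names and data are illustrative.

```python
# Minimal sketch of vector-based ("neural") search: embed everything,
# then rank by cosine similarity. A real system would replace embed()
# with a deep language model encoder.
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a neural encoder: a unit-length term-count vector.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {term: c / norm for term, c in counts.items()}

def cosine(a, b):
    # Dot product of two unit vectors stored as sparse dicts.
    return sum(weight * b.get(term, 0.0) for term, weight in a.items())

def search(query, documents):
    # Return documents ranked by similarity to the query, best first.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "enterprise search platform",
    "wool socks guarantee",
    "neural search for enterprises",
]
print(search("enterprise neural search", docs))
```

Note the toy’s limitation: “enterprise” and “enterprises” do not match as bag-of-words terms, whereas a real neural encoder would place them close together in vector space. That semantic matching is precisely the point of the “neural” part.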

Wonderful for Sinequa! Search and retrieval, especially across foreign languages, is one of the biggest time wasters in productivity. Hopefully, Sinequa actually delivers an industry-changing product; otherwise, it has simply added more jargon to the tech glossary.

Whitney Grace, July 22, 2022

Meta or Zuckbook: A Look Back to 2020 and 2021 and Years of Human Rights and Other Stuff Progress

July 18, 2022

Meta or the Zuckbook is into human rights. The evidence is a free 83 page report called “Meta Human Rights Report: Insights and Actions 2020-2021.” The document covers in order of presentation:

An Executive Summary (~ seven pages)

Meta’s Human Rights Work in Practice (~ two pages)

Table of Contents with the book beginning on page 13 (yeah, I wondered about the numbering too)

Human Rights Policy Timeline

Part 1: Meta’s Human Rights Commitments (~ 11 pages)

Part 2: Meta’s Human Rights Policy in Practice (pages 28 to 82, about 54 pages)

A Final Note.

The content of the report is interesting. I found a couple of statements which caused me to take up my trusty True Blue color marker. May I share what I circled?

We seek to embed our commitments in a governance model which supports integration of our human rights work with ongoing activities and policies on civil rights and Environmental, Social and Corporate Governance (ESG) efforts, as part of the company’s culture, governance, decision making processes and communication strategies.

Seek and you will find I suppose.

Simply put — we seek to translate human rights guidance into meaningful action, every day.

Yep, another notable “seek.”

In these circumstances we seek to promote international human rights standards by engaging with governments, and by collaborating with other stakeholders and companies.

Okay, seek. How about a quick visit to FreeThesaurus.com for some help?

We also have technical mechanisms in place to mitigate and prevent third parties from accessing data from Meta, through proactive and reactive measures like prevention, deterrence, detection and enforcement.

Do Israeli intelware companies have systems and methods to obviate these super duper data slurpers? “Senator, thank you for the question. I will send that information to your office” may be the response to a Congressional questioner.

I enjoyed this quote from the sci fi icon Isaac Asimov:

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

Here’s my take on Facebook-type social media:

Nothing tears apart a society faster than ill-managed, ad-centric social media.

I am not Isaac Asimov, but I think I am correct in my observation. Enjoy the “looking back” report from the estimable virtual reality social ad selling heir to MySpace and Friendster. Will Facebook share a similar fate? Gee, I hope not. I am interested in learning if Isaac Asimov’s quote applies to Facebook in 2022:

You don’t need to predict the future. Just choose a future — a good future, a useful future — and make the kind of prediction that will alter human emotions and reactions in such a way that the future you predicted will be brought about. Better to make a good future than predict a bad one.

Did the Meta Zuck thing predict I would sit in my chair with a headset on, interacting with what may or may not be humans, instead of meeting in a coffee shop or an office conference room and talking to a live person? What’s up in 2022? Wow, more Zuckster stuff.

Stephen E Arnold, July xx, 2022

Google Smart Software: Is This an Important Statement?

July 15, 2022

I read “Human-Centered Mechanism Design with Democratic AI,” which continues Google’s PR campaign to win the smart software war. Like the recent interview with a DeepMind executive on the exciting Lex Fridman podcast and YouTube programs, the message is clear: The Google’s smart software is not “alive.” (The interesting PR speak begins about one hour and 20 minutes into the two-hour-plus interview.) The subtext is, in my opinion, “Nope, no problem. We have smart software lassoed with our systems and methods.” Okay, I think I understand framing, filtering, and messaging designed to permit online advertising to be better, faster, and maybe cheaper.

This most recent contribution uses language many previous Googley papers do not; for example, “human” and “democratic.”  The article includes graphics which I must confess I found a bit difficult to figure out. Here’s an illustrative image which baffled me:

[image: a figure from the cited Nature article]

The Google and its assorted legal eagles created this image from the data referenced in the cited article. Yes, Google and attendant legal eagles, you are the ultimate authorities for this image from the article in Nature.

Those involved with business intelligence will marvel at Google’s use of different types of visualizations to make absolutely crystal clear the researchers’ insights, findings, and data.

Great work.

I did note one passage on page nine of the Nature article:

[image: a passage from page nine of the Nature article]

Here is the operative language used to explain some of the democratic methods:

Defined

Objective

We wished to maximize

Estimated

Auto-differentiating

We chose

Net net: Researchers at the Google determine and then steer the system. Human-centered design meshes with the Snorkel and synthetic data methods I presume. And bias? Where there are humans, there may be bias. How human-centered were the management decisions about certain staff in the Google smart software units?

Stephen E Arnold, July 15, 2022

IBM Seeks to Avoid Groundhog Day in AI/ML

July 8, 2022

How do you deliver the killer AI/ML system? Via news releases and PR perhaps?

The Next Web claims that, “IBM’s Human-Centered Approach Is The Only Big Tech Blueprint AI Startups Should Follow.” Author Tristan Greene reminds readers that IBM’s initials stand for International Business Machines and recounts meeting the company’s first chief AI officer, Seth Dobrin. Dobrin said IBM would never focus on consumer AI; i.e., virtual assistants and selfie apps.

Dobrin also stated that IBM’s goal is to create AI models that improve human life and provide value for its clients and partners. It is apparently not hard to do if you care about how individuals will be affected by monetized models. He compared these models to toys:

“During a discussion with the Financial Times’ Tim Bradshaw during the conference, Dobrin used the example of large-parameter models such as GPT-3 and DALL-E 2 as a way to describe IBM’s approach.

He described those models as “toys,” and for good reason: they’re fun to play with, but they’re ultimately not very useful. They’re prone to unpredictability in the form of nonsense, hate speech, and the potential to output private personal information. This makes them dangerous to deploy outside of laboratories.

However, Dobrin told Bradshaw and the audience that IBM was also working on a similar system. He referred to these agents as “foundational models,” meaning they can be used for multiple applications once developed and trained.”

IBM takes a human approach to its projects. Instead of feeding its AI datasets that could contain offensive information, IBM checks the data first before experimenting. That way the AI is already compliance ready and there will not be any bugs to work out later (at least the prejudice type). IBM is also focused on outcomes, not speculation, which is not how the tech giants work.

IBM wants to withstand an AI winter that could come after the fancy lights, parlor tricks, and flashy PR campaigns are in the past. Human-centered AI technologies, as Dobrin believes, will last longer and provide better services. IBM is also dedicated to sustainability.

IBM is green and wants to create better products and services before launch? It sounds better than most, but can it deliver?

Whitney Grace, July 8, 2022

How Do You Build Traffic for a Gray Lady Service?

June 29, 2022

If you are a semi traditional publisher, with a desire to be the newspaper for the Big Boys, how do you build traffic? The answer is: Get into online games.

“Top 50 Most Popular News Websites in the World: Wordle Fuels Huge New York Times Traffic Growth” states:

The New York Times was the fastest growing top news sites in the world in May 2022, according to Press Gazette’s latest ranking of the 50 biggest English-language news websites in the world.

What other site experienced strong growth? Egames, online adult content with a math class, the Zuckbook?

Nope. I learned:

The fastest growing site overall among the whole top 50 list was again, Live Universal Awareness Map (liveuamap.com), which presents updates on conflicts in the form of a map (53.2m visits, up 1987%). Its huge surge is due to increased interest in its Ukraine coverage compared to the low base from which it started.

Are we looking at the future of traffic generation? A newspaper with an online game and a special operation which in some places should not be called a war. As an aside, it seems as if the newspaper should have been doing the live map info delivery. Obviously that type of data are not part of a newspaper’s mission.

Stephen E Arnold, June 29, 2022

Google Takes Bullets about Its Smart Software

June 23, 2022

Google continues its push to the top of the PR totem pole. “Google’s AI Isn’t Sentient, But It Is Biased and Terrible” is in some ways a quite surprising write up. The hostility seeps from the spaces between the words. Not since the Khashoggi diatribes have “real news” people been as focused on the shortcomings of the online ad giant.

The write up states:

But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience.

I like the fact that the fixation is nested beneath the clumsy and embarrassing (and possibly actionable) termination of some of the smart software professionals.

The write up points out that the Google “distanced itself” from the assertion that Alphabet Google YouTube DeepMind’s (AGYT) is smart like a seven year old. (Aren’t crows supposed to be as smart as a seven year old?)

I noted this statement:

The ensuing debate on social media led several prominent AI researchers to criticize the ‘super intelligent AI’ discourse as intellectual hand-waving.

Yeah, but what does one expect from the outfit which wants to solve death? Quantum supremacy or “hand waving”?

The write up concludes:

Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems, including Gebru and Mitchell. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought—especially since it proves exactly what Gebru and her colleagues had tried to warn us about.

What do I make of this Google AI PR magnet?

Who said, “Any publicity is good publicity?” Was it Dr. Gebru? Dr. Jeff Dean? Dr. Ré?

Stephen E Arnold, June 23, 2022

Cyber Security: PowerPoints Are Easy. Cyber Security? Not So Much

June 21, 2022

I receive a couple of cyber security, cyber threat, and cyber risk reports every week. What’s interesting is that each of the cyber security vendors mentioned in the news releases, articles, and blog posts discovers something no other cyber outfit talks about. Curious.

I read “Most Security Product Buyers Aren’t Getting Promised Results: RSA Panel.” The article explains that other people poking around in security have noticed some oddities, if not unexplained cyber threats too.

The article reports:

Hubback [an expert from ISTARI] said that “90% of the people that I spoke to said that the security technologies they were buying from the market are just not delivering the effect that the vendors claim they can deliver. … Quite a shocking proportion of people are suffering from technology that doesn’t deliver.”

I found this factoid in the write up interesting:

…vendors know their product and its strengths and weaknesses, but buyers don’t have the time or information to understand all their options. “This information asymmetry is the classic market for lemons, as described by George Akerlof in 1970,” said Hubback. “A vendor knows a lot more about the quality of the product than the buyer so the vendor is not incentivized to bring high-quality products to market because buyers can’t properly evaluate what they’re buying.”

Exploitation of a customer’s ignorance and trust?

Net net: Is this encouraging bad actors?

Stephen E Arnold, June 21, 2022

Pi: Proving One Is Not Googley

June 17, 2022

I read “Google Sets New Record for Calculating Pi — But What’s the Point?” The idea for this story is Google’s announcement that it had calculated pi to 100 trillion digits or 1×10^14. My reaction to Google’s announcement is that it is similar to the estimable firm’s claim to quantum supremacy, its desire to solve death, and to make Google Glass the fungible token for Google X or whatever the money burner was called.

But the value of the article is to demonstrate that the publisher and the author are not Googley. One does not need a reason to perform what is a nifty high school science club project. Sure, there may be some alchemists, cryptographers, and math geeks who are into pi calculations. What if numbers do repeat? My goodness!
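On the “high school science club project” point: the record runs use specialized software and fast-converging series, but the flavor of the exercise fits in a few lines. Here is a minimal sketch using Machin’s 1706 formula, pi = 16·arctan(1/5) − 4·arctan(1/239), with Python’s decimal module; the function names and the choice of 30 digits are illustrative, and this is obviously nowhere near how 100 trillion digits get computed.

```python
# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239), evaluated in
# arbitrary-precision decimal arithmetic. The hard part of a 100-trillion-
# digit run is scale, storage, and I/O, not the math.
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    # arctan(1/x) via its Taylor series: sum (-1)^k / ((2k+1) * x^(2k+1)).
    getcontext().prec = digits + 10  # extra guard digits
    total = term = Decimal(1) / x
    n, sign = 1, 1
    while abs(term) > Decimal(10) ** -(digits + 5):
        n += 2
        sign = -sign
        term = Decimal(sign) / (n * Decimal(x) ** n)
        total += term
    return total

def pi_digits(digits=50):
    getcontext().prec = digits + 10
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return str(+pi)[: digits + 2]  # "3." plus the requested digit count

print(pi_digits(30))  # -> 3.141592653589793238462643383279
```

Machin-style arctan formulas are what the pre-computer record hunters used; modern record attempts favor the Chudnovsky series, which delivers about 14 digits per term.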

I think the other facet of the 100 trillion digits is to make clear that Google can burn computing resources; for example:

In total, the process used a whopping 515 TB of storage and 82 PB of I/O.

To sum up, the 100 trillion pi calculations make it easy [1] for the Google to demonstrate that you cannot kick the high school science club mentality even when one is allegedly an adult, and [2] identify people who would not be qualified to work at Google either as a full time equivalent, a contractor, or some other quasi Googley life form like an attorney or a marketing professional.

That’s the point?

Stephen E Arnold, June 17, 2022
