PR Professionals: Unethical?
January 28, 2022
Public relations campaigns shape the public’s perception. PR experts can flip a situation to make it negative or positive, depending on the desired outcome. Entrepreneur discussed how public relations campaigns challenge societal ethics and give new meaning to Orwell’s doublethink in “Public Relations Bring Ethics Under The Spotlight.” PR experts have been accused for decades of shaping reality, and the past few years have exploded with fake blogging, fake grassroots lobbying, and stealth marketing.
These nefarious PR tactics are only the tip of the iceberg, because controlling reality goes further: training spokespeople to remain silent in media interviews, monitoring their social media channels, and backtracking when necessary. This goes against the true purpose of PR:
“Monitoring and criticism from outside and inside the public relations industry keep a watch on the vast industry that public relations has become. This, in turn, makes practitioners and the industry responsive to what constitutes appropriate conduct. Ethical public relations should not aim merely to confuse or cause equivocation but should inform and honestly influence judgment based on good reasons that advance the community. A necessary precondition of professionalism is ethically defensible behavior. Such a framework derives from philosophical and religious attitudes to behavior and ethics, laws and regulations, corporate and industry codes of conduct, public relations association codes of ethics, professional values and ethics, training and personal integrity.”
Keeping ethics in PR practice appears to be a thing of the past, especially with the actions of many world governments before and after the COVID-19 pandemic hit.
There are three fundamental ethical frameworks: teleology, deontology, and Aristotle’s Golden Mean. Immanuel Kant, often called the founder of modern ethics, developed a three-step method for solving ethical dilemmas:
“1. When in doubt as to whether an act is moral or not, apply the categorical imperative, which is to ask the question: “What if everyone did this deed?”
2. Always treat all people as ends in themselves and never exploit other humans.
3. Always respect the dignity of human beings.”
PR experts are subject to the same demands as everyone else: they must make a living in order to survive. Unlike the average retail or office worker, they have skills that change the public’s perception of an event, organization, or individual. PR experts usually respond to the demands of their clients, because the client is paying the bills. Saying no? Maybe not too popular at some firms.
Whitney Grace, January 28, 2022
PR Dominance: NSO Group Vs Peloton
January 27, 2022
If you have followed the PR contrail behind the NSO Group, you probably know that the Israeli specialized software and services firm has become a household name at least among the policeware and intelware community. A recent example is reported in “Israel’s Attorney General Orders Probe of NSO Spyware Claims.” The write up explains:
Israel’s attorney general says he is launching an investigation into the police’s use of phone surveillance technology following reports that investigators tracked targets without proper authorization.
Not good.
But there is a bright cloud on the horizon.
“Second TV Show Emerges With Peloton Twist As A Plot Point” asserts:
Already reeling from its announcement last week that it is halting production of its connected fitness products as demand wanes, Peloton must now face another tv show that seems to indicate its devices may cause issues for a certain segment of the population.
Translating the muffy-wuffy writing: the idea is that a character in a US TV show rides a Peloton, suffers a heart attack, and dies. The alleged panini-zation of small creatures under one model’s walking belt was a definite negative. But not even NSO Group is depicted knocking off the talent in a program. Keep in mind that two shows use the Peloton as an artistic device, a twist on the deus ex machina from the Greek tragedies required in high school English class.
Will Peloton continue its climb to the top of the PR leader board? My hunch is that NSO Group hopes that it does.
Stephen E Arnold, January 27, 2022
Meta Zuck: AIR SC Sort of Sketched Out
January 25, 2022
I read Facebook’s (Meta’s) blog post called “Introducing the AI Research SuperCluster — Meta’s Cutting-Edge AI Supercomputer for AI Research.” The post states:
Today, Meta is announcing that we’ve designed and built the AI Research SuperCluster (RSC) — which we believe is among the fastest AI supercomputers running today and will be the fastest AI supercomputer in the world when it’s fully built out in mid-2022.
Then this statement:
Ultimately, the work done with RSC will pave the way toward building technologies for the next major computing platform — the metaverse, where AI-driven applications and products will play an important role.
So the AIR SC is sort of real. The applications for the AIR SC are sort of metaverse. That’s not here either in my opinion.
So what’s going on? Here are my thoughts:
- Facebook wants to stake out conceptual territory claims, as AT&T did with its not-quite-5G announcements about under-construction 5G capabilities.
- Facebook wants to show that its AIR SC is bigger, better, faster, and more super than anything from Amazon, Google, or the other quasi-monopolies that want systems to dominate the supercomputer league table for now and possibly forever, unless government regulators or user behavior changes the game plan.
- Facebook believes the Silicon Valley marketing mantra, “Fake it until you make it” with a possible change. I interpret the announcement to say, “Over promise and under deliver.” I admit I have become jaded with the antics of these corporate giants who have been able to operate without meaningful oversight or what some might call ethical guidelines for a couple of decades.
In the old days, companies in the Silicon Valley mode did vaporware. The tradition continues? Sure, why not? There’s even a TikTok style video to get the AIR SC message across.
Stephen E Arnold, January 25, 2022
Google Identifies Smart Software Trends
January 18, 2022
Straight away the marketing document “Google Research: Themes from 2021 and Beyond” is more than 8,000 words. Anyone familiar with Google’s outputs may have observed that Google prefers short, mostly ambiguous phraseology. Here’s an example from Google support:
Your account is disabled
If you’re redirected to this page, your Google Account has been disabled.
When a Google document is long, it must be important. Furthermore, when that Google document is allegedly authored by Dr. Jeff Dean, a long-time Googler, you know it is important. Another clue is the list of 32 contributors, helpfully alphabetized by each individual’s first name. Hey, those traditional bibliographic conventions are not useful. Chicago Manual of Style? Balderdash, it seems.
Okay, long. Lots of authors. What are the trends? Based on my humanoid processes, it appears that the major points are:
TREND 1: Machine learning is cranking out “more capable, general purpose machine learning models.” The idea, it seems, is that the days of hand-crafting a collection of numerical recipes, assembling and testing training data, training the model, fixing issues in the model, and then applying the model are either history or going to be history soon. Why’s this important? Cheaper, faster, and allegedly better machine learning deployment. What happens if the model is off a bit or drifts? No worries. Machine learning methods which make use of a handful of human overseers will fix up the issues quickly, maybe in real time.
TREND 2: There are more efficiency improvements in the works. The idea is that more efficiency is better, faster, and logical. One can look at the achievements of smart software in autonomous automobiles to see the evidence of these efficiencies. Sure, there are minor issues because smart software sometimes outputs a zero when a one is needed. What’s a highway fatality in the total number of safe miles driven? Efficiency also means it is smarter to obtain ready-to-roll machine learning models and data sets from large, efficient, high-technology outfits. One source could be Google. No kidding? Google?
TREND 3: “Machine learning is becoming more personally and communally beneficial.” Yep, machine learning helps the community. Now, is the “community” the individual who works on deep dives into Google’s approach to machine learning, or one who sails in a different direction? Is the community the advertisers who rely on Google to match, in an intelligent and efficient manner, sales messages to users, both human and system communities? Is the communally beneficial group the users of Google’s ad-supported services? The main point is that Google and machine learning are doing good and will do better going forward. This is a theme Google management expresses each time it has an opportunity to address a concern about the company’s activities in a hearing in Washington, DC.
TREND 4: Machine learning is going to have “growing impact” on science, health, and sustainability. This is a very big trend. It implicitly asserts that smart software will improve “science.” In the midst of the Covid issue, humans appear to have stumbled. The trend is that humans won’t make such mistakes going forward; for example, Theranos-type exaggeration, CDC contradictory information, or Google and the allegations of collusion with Facebook. Smart software will make these examples shrink in number. That sounds good, very good.
TREND 5: A notable trend is that there will be a “deeper and broader understanding of machine learning.” Okay, who is going to understand? Google-certified machine learning professionals, advertising intermediaries, search engine optimization experts, consumers of free Google Web search, Google itself, or some other cohort? Will the use of off-the-shelf, prepackaged machine learning data sets and models make it more difficult to figure out what is behind the walls of a black box? Anyway, this trend sounds like suitable do-good, technology-will-improve-the-world talk that promises a bright, sunny day even though a weathered fisherperson says, “A storm is a-coming.”
The write up includes art, charts, graphs, and pictures. These are indeed Googley. Some are animated. Links to YouTube videos enliven the essay.
The content is interesting, but I noted several omissions:
- No reference to making decisions which do not allegedly contravene one or more regulations, or which just look like really dicey decisions. Example: “Executives Personally Signed Off on Facebook-Google Ad Collusion Plot, States Claim”
- No reference to the use of machine learning to avoid what appear to be ill-conceived and possibly dumb personnel decisions within the Google smart software group. Example: “Google Fired a Leading AI Scientist but Now She’s Founded Her Own Firm”
- No reference to antitrust issues. Example: “India Hits Google with Antitrust Investigation over Alleged Abuse in News Aggregation.”
Marketing information is often disconnected from the reality in which a company operates. Nevertheless, it is clear that the number of words, the effort invested in whizzy diagrams, and the overwrought rhetoric are different from Google’s business-as-usual approach.
What’s up or what’s covered up? Perhaps I will learn in 2022 and beyond?
Stephen E Arnold, January 18, 2022
Business Intelligence: Popping Up a Level Pushes Search into the Background
January 17, 2022
I spotted a diagram in the Data Science Central article “Business Intelligence Analytics in One Picture.” The diagram takes business intelligence and describes it as an “umbrella term.” From my point of view, popping up a conceptual label creates confusion. First, can anyone define “intelligence” as the word is used in computer sectors? Now how about “artificial intelligence,” “government intelligence,” or “business intelligence”? Each of these phrases is designed to sidestep the problem of explaining what functions are necessary to produce useful or higher value information.
Let’s take an example. Business intelligence suggests that information about a market, a competitor, a potential new hire, or a technology can be produced, obtained (by fair means or foul), or predicted (fancy math, synthetic data, etc.). The core idea is gaining an advantage. That is too crude for many professionals who provide business intelligence; for example, the mid-tier consulting firms cranking out variations of General Eisenhower’s four-square graph or a hyperbole cycle.
Business intelligence is a marketing confection. The graph identifies specific “components” of business intelligence. Some of the techniques necessary to obtain high value information are not included; for example, running a fake job posting designed to attract employees who currently work at the company subject to a business intelligence process; surveillance via mobile phones; sitting in a Starbucks watching and eavesdropping; or using analytic procedures to extract “secrets” from publicly available documents like patent applications, among others.
Business intelligence is not doing any of those things because they are [a] unethical, [b] illegal, [c] too expensive, or [d] difficult. The notion of “ethical behavior” is an interesting one. We have certain highly regarded companies taking actions which some in government agencies find improper. Nevertheless, the actions continue, not for a week or two but for decades. So maybe ethics applied to business intelligence is a non-starter. Even so, certain research groups are quick to point out that unethical information gathering is not the dish served at conference luncheons.
Here are the elements or molecules of business intelligence:
- Data mining
- Data visualization
- Data preparation
- Data analytics
- Performance metrics / benchmarking
- Querying
- Reporting
- Statistical analysis
- Visual analysis
Data mining, data analytics, performance metrics / benchmarking, and statistical analysis strike me as one thing: Numerical procedures.
Now the list looks like this:
- Numerical procedures
- Data visualization
- Data preparation
- Querying
- Reporting
- Visual analysis
Let’s concatenate data visualization and visual analysis into one function: Producing charts and graphs.
Now the list looks like this:
- Producing charts and graphs
- Data preparation
- Numerical procedures
- Querying
- Reporting
Querying, in this simplification, has moved from one of nine functions to one of five functions.
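The consolidation above can be expressed as a simple mapping; this is a toy sketch of my own, with category labels of my choosing, not something from the cited chart:

```python
# Collapse the nine "components" of business intelligence from the chart
# into the five consolidated functions discussed above.
CONSOLIDATION = {
    "data mining": "numerical procedures",
    "data analytics": "numerical procedures",
    "performance metrics / benchmarking": "numerical procedures",
    "statistical analysis": "numerical procedures",
    "data visualization": "producing charts and graphs",
    "visual analysis": "producing charts and graphs",
    "data preparation": "data preparation",
    "querying": "querying",
    "reporting": "reporting",
}

# The unique values are the simplified list: nine functions become five.
consolidated = sorted(set(CONSOLIDATION.values()))
print(len(CONSOLIDATION), "->", len(consolidated))  # 9 -> 5
print(consolidated)
```

The point of the exercise survives the code: querying, unchanged by any grouping, moves from one-ninth to one-fifth of the picture.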
What’s up with business intelligence whipping up disciplines? Is the goal to make business intelligence more important? Is it a buzzword exercise so consultants can preach doom and sell snake oil? Is it a desire to add holiday lights and ornaments to distract people from what business intelligence is?
My hunch is that business intelligence professionals don’t want to use the words spying, surveillance, intercepts, eavesdrop, or operate like a nation state’s intelligence agency professionals.
One approach is business intelligence, which seems to mean good, mathy, and valuable. The spy approach is bad and could lead to a black mark on one’s Lifetime Report Card.
The fact is that one of the most important components of any intelligence operation is asking the right question. Without querying, masses of data, statistics software, and online experts with MBAs would not be able to find an online ad using Google.
Net net: The chart makes spying and surveillance into a math-centric operation. The chart fails to provide a hierarchy based on asking the right question. Will the diagram help sell business intelligence consulting and services? The scary answer is, “Absolutely.”
Stephen E Arnold, January 14, 2022
Microsoft: Putting Teeth on Edge
January 11, 2022
Usually a basic press release for a Microsoft update receives little discussion, but OS News recently posted a small quip, “Update For Windows 10 And 11 Blocks Default Browser Redirect, But There Is a Workaround,” and users left testy comments. The fighting words were:
“It seems that Microsoft has quietly backported the block, introduced a month ago in a Dev build of Windows 11, on tools like EdgeDeflector and browsers from being the true default browser in Windows 10, with the change being implemented in Windows 11 too. Starting from KB5008212, which was installed on all supported versions of Windows 10 yesterday with Patch Tuesday, it is no longer possible to select EdgeDeflector as the default MICROSOFT-EDGE protocol.”
Followed by this sarcastic line: “They spent engineering resources on this.”
Users were upset because it meant Microsoft blocked other Web browsers from becoming a system’s default. It is a corporate strategy to normalize anti-competitive restrictions, but some users defended Microsoft’s move. They stated that blocking other Web browsers protected vulnerable users, like the elderly, from accidentally downloading malware and adware.
The comments then turned into an argument between tech-savvy experts and regular users who do not know jack about technology. The discussion ended with semi-agreement that users need protection from freeware that forcefully changes a system, but that ultimately users should have the final say over their system settings.
In the end, the comments shifted to why Microsoft wants Edge to be the system default: money and deflecting attention from its interesting approaches to security.
Whitney Grace, January 11, 2022
Windows 11: Loved and Wanted? Sure As Long As No One Thinks about MSFT Security Challenges
January 10, 2022
I hold the opinion that the release of Windows 11 was a red herring. How does one get the tech pundits, podcasters, and bloggers to write about something other than SolarWinds, Exchange, etc.? The answer from my point of view was to release the mostly odd Windows 10 refresh.
Few in my circle agreed with me. One of my team installed Windows 11 on one of our machines and exclaimed, “I’m feeling it.” Okay, I’m not. No Android app support, round corners, and like it, dude, you must use Google Chrome, err, I mean Credge.
I read “Only 0.21%, Almost No One Wants to Upgrade Windows 11.” Sure, the headline is confusing, but let’s look at the data. I believe everything backed by statistical procedures practiced by an art history major whose previous work experience includes taking orders at Five Guys.
The write up states:
According to the latest research by IT asset management company Lansweeper, although Windows 10 users can update Windows 11 for free, it is currently only 0.21%. Of PC users are running Windows 11.
I am not sure what this follow-on construction means:
At present, Windows 11 is very good. Probably the operating system with the least proportion.
I think the idea is that people are not turning cartwheels over Windows 11. Wasn’t Windows 10 supposed to be the last version of Windows?
I am going to stick with my hypothesis that Windows 11 was pushed out the door, surprising Windows experts with allegedly “insider knowledge” about what Microsoft was going to do. The objective was to deflect attention from Microsoft’s significant security challenges.
Those challenges have been made a little more significant with Bleeping Computer’s report “Microsoft Code Sign Check Bypassed to Drop Zloader.”
Is it time for Windows 12, removing Paint, and charging extra for Notepad?
Possibly.
Stephen E Arnold, January 10, 2022
Perhaps Someone Wants to Work at Google?
January 7, 2022
I read another quantum supremacy rah rah story. What’s quantum supremacy? IBM and others want it whatever it may be. “Google’s Time Crystals Could Be the Greatest Scientific Achievement of Our Lifetimes” slithers away from the genome thing, whatever the Nobel committee found interesting, and dark horses like the NSO Group’s innovation for seizing an iPhone user’s mobile device just by sending the target a message.
None of these is in the running. What we have, according to The Next Web, is what may be:
the world’s first time crystal inside a quantum computer.
Now the quantum computer is definitely a Gartner go-to technology magnet. Google is happy with DeepMind’s modest financial burn rate as it seeks to reign supreme. The Next Web outfit is doing its part. Two questions:
What’s a quantum computer? A demo, something that DARPA finds worthy of supporting, or a financial opportunity for clever physicists and assorted engineers eager to become the Seymour Crays of 2022?
What’s a time crystal? Frankly I have no clue. Like some hip phrases — synaptic plasticity, phubbing, and vibrating carbon nanohorns, for instance — time crystal is definitely evocative. The write up says:
Time crystals don’t give a damn what Newton or anyone else thinks. They’re lawbreakers and heart takers. They can, theoretically, maintain entropy even when they’re used in a process.
The write up includes a number of disclaimers, but the purpose of the time crystal strikes me as part of the Google big PR picture. Whether time crystals are a thing like yeeting alphabet boys or hyperedge replacement graph grammars, the intriguing linkage of Google, quantum computing, and zippy time crystals further cements the idea that Google is a hot bed of scientific research, development, and innovation.
My thought is that Google is better at getting article writers to make their desire to work at Google evident. Google has not quite mastered the Timnit Gebru problem, however.
And are the Google results reproducible? Yeah, sure.
Stephen E Arnold, January 7, 2022
A New Spin on Tech Recruitment
January 7, 2022
“Knock Knock! Who’s There? – An NSA VM” is an interesting essay for three reasons.
First, it contains a revealing statement about the NSO Group:
Significant time has passed and everyone went crazy last week with the beautiful NSO exploit VM published by Project Zero, so why not ride the wave and present a simple NSA BPF VM. It is still an interesting work and you have to admire the great engineering that goes behind this code. It’s not everyday that you can take a peek at code developed by a well funded state actor.
I noticed that the write up specifically identifies the NSO Group as a “state actor.” I think this means that NSO Group was working for a country, not the customers. This point is one that has not poked through the numerous write ups about the Israel-based company.
Second, the write up walks through a method associated with the National Security Agency. In terms of technical usefulness, one could debate whether the write up contains old news or new news. The information does make it clear that there are ideas for silent penetration of targeted systems. The targets are not specific mobile phones. It appears that the targets of the methods referenced and the sample code provided are systems higher in the food chain.
Third, the write up is actually a recruitment tool. This is not novel, but it is probably going to lead to more “look how smart and clever we are, come join us” blandishments in the near future. My hunch is that some individuals, eager to up their game, will emulate the approach.
Is this method of sharing information a positive or negative? That depends on whom one asks, doesn’t it?
Stephen E Arnold, January 7, 2022
Datasets: An Analysis Which Tap Dances around Some Consequences
December 22, 2021
I read “3 Big Problems with Datasets in AI and Machine Learning.” The arguments presented support the SAIL, Snorkel, and Google type approach to building datasets. I have addressed some of my thoughts about configuring once and letting fancy math do the heavy lifting going forward. This is probably not the intended purpose of the Venture Beat write up. My hunch is that pointing out other people’s problems frames the SAIL, Snorkel, and Google type approaches. No one asks, “What happens if the SAIL, Snorkel, and Google type approaches don’t work or have some interesting downstream consequences?” Why bother?
Here are the problems as presented by the cited article:
- The Training Dilemma. The write up says: “History is filled with examples of the consequences of deploying models trained using flawed datasets.” That’s correct. The challenge in creating and validating a training set for a discipline, topic, or “space” is that new content arrives using new lingo and even metaphors instead of words like “rock.” What informed people from the early days of Autonomy’s neuro-linguistic method know is that no one wants to spend money, time, and computing resources on endless Sisyphean work. That rock keeps rolling back down the hill. This is a deal breaker, so considerable effort has been expended figuring out how to cut corners, use good enough data, set loose-shoes thresholds, and rely on normalization to smooth out the acne scars. Thus, we are in an era of using what’s available. Make it work or become a content creator on TikTok.
- Issues with Labeling. I don’t like it when the word “indexing” is replaced with words like labels, metatags, hashtags, and semantic signposts. Give me a break. Automatic indexing is more consistent than human indexers, who get tired and fall back on a quiver of terms because who wants to work too hard at what is, for many, a boring job. But the automatic systems are in the same “good enough” basket as smart training data set creation. The problem is words and humans. Software is clueless when it comes to snide remarks, cynicism, certain types of fake news and bogus research reports in peer reviewed journals, etc. Indexing using esoteric words means the Average Joe and Janet can’t find the content. Indexing with everyday words means that search results work great for pizza near me but not so well for beatles diet when I want food insects eat, not what kept George thin. The write up says: “Still other methods aim to replace real-world data with partially or entirely synthetic data — although the jury’s out on whether models trained on synthetic data can match the accuracy of their real-world-data counterparts.” Yep, let’s make up stuff.
- A Benchmarking Problem. The write up asserts: “SOTA benchmarking [also] does not encourage scientists to develop a nuanced understanding of the concrete challenges presented by their task in the real world, and instead can encourage tunnel vision on increasing scores. The requirement to achieve SOTA constrains the creation of novel algorithms or algorithms which can solve real-world problems.” Got that. My view is that validating data is a bridge too far for anyone except a graduate student working for a professor with grant money. But why benchmark when one can go snorkeling? The reality is that datasets are in most cases flawed but no one knows how flawed. Just use them and let the results light the path forward. Cheap and sounds good when couched in jargon.
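The beatles diet problem in the labeling bullet can be seen with a minimal inverted index; the documents, and the index itself, are invented for illustration, not taken from any real system:

```python
# Two invented documents: one about the band, one about the insect.
docs = {
    1: "the beatles diet kept george harrison thin",
    2: "what beetles eat a diet of leaves and fungus",
}

# Build a tiny inverted index: word -> set of document ids.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def query(*terms):
    """Return doc ids containing every term (AND semantics)."""
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

# Everyday words collide: "diet" matches both senses, and the
# beatles/beetles spelling decides which sense the searcher gets.
print(query("beatles", "diet"))  # the band: {1}
print(query("diet"))             # both senses: {1, 2}
```

The index is perfectly consistent, which is the point: consistency does not rescue a query whose everyday words carry two meanings.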
What’s the fix? The fix is what I call the SAIL, Snorkel, and Google type solution. (Yep, Facebook digs in this sandbox too.)
My take is easily expressed just not popular. Too bad.
- Do the work to create and validate a training set. Rely on subject matter experts to check outputs and when the outputs drift, hit the brakes, and recalibrate and retrain.
- Admit that outputs are likely to be incomplete, misleading, or just plain wrong. Knock off the good enough approach to information.
- Return to methods which require thresholds to be validated by user feedback and output validity. Letting cheap and fast methods decide which secondary school teacher gets fired strikes me as not too helpful.
- Make sure analyses of solutions don’t function as advertisements for the world’s largest online ad outfit.
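The first item, watch for drift and recalibrate, can be sketched as a simple agreement monitor; a minimal illustration with an invented threshold and invented data, not anyone’s production method:

```python
# Minimal drift check: compare model outputs against labels reviewed by
# subject matter experts; when agreement falls below a threshold, flag
# the model for recalibration and retraining. The 0.90 threshold and the
# sample data below are invented for illustration.
AGREEMENT_THRESHOLD = 0.90

def needs_retraining(model_outputs, expert_labels, threshold=AGREEMENT_THRESHOLD):
    """Return True when agreement with the experts drops below threshold."""
    if len(model_outputs) != len(expert_labels) or not model_outputs:
        raise ValueError("need matched, non-empty output/label lists")
    agreement = sum(m == e for m, e in zip(model_outputs, expert_labels)) / len(expert_labels)
    return agreement < threshold

# A drifted model: 3 of 10 outputs now disagree with the experts.
outputs = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels  = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
print(needs_retraining(outputs, labels))  # True (agreement = 0.7)
```

The work, of course, is in the part the sketch assumes away: paying experts to produce those labels in the first place.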
Stephen E Arnold, December 22, 2021