Great Moments in Management: Rolfing and AmaZen

May 31, 2021

Happy holiday, everyone. I spotted two fascinating examples of message control and management this morning. The first is the Daily Dot “real” news about an Amazon driver getting into Rolfing. “Weekend Update: People Feel Sorry for Amazon Driver Caught Screaming from Truck” reports:

One of the TikTok videos in question features an Amazon driver screaming at the top of his lungs as he makes his way down the street in the delivery truck. His apparent distress while on the clock has sympathetic viewers calling for higher wages and better working conditions for all Amazon employees.

Yep, TikTok. Who can believe that content engine? Probably some of the deep thinkers who absorb information in 30-second chunks. I think of TikTok as a “cept ejector.” Yes, like Rolfing, “cepts” were a thing years ago. Rusty on Rolfing? Check out this link.

The second mesmerizer is described in “Introducing the “AmaZen” Booth, a Box Designed for Convenient, On-Site Worker Breakdowns.” This coffin-sized object looks like, well, a coffin. The story reports that an Amazon wizard said:

“With AmaZen, I wanted to create a space that’s quiet, that people could go and focus on their mental and emotional well-being,” Brown explains over footage of an employee entering what looks like a porta potty decorated with pamphlets, a computer, some sad little plants, and a tiny fan. She continues, calling the overgrown iron maiden a place to “recharge the internal battery” by checking out “a library of mental health and mindful practices.”

What do screaming employees and a work environment requiring a coffin-sized Zen booth suggest? Many interesting things, I wager.

Management excellence in action. Is this something a high school science club might set up as a prank?

Stephen E Arnold, May 31, 2021

Endeca: In the News Again. Remarkable

May 31, 2021

Endeca is the outfit which was among the first of the search vendors pushing the concept of “facets” and “guided navigation.” The technology dates from 1999. The company was interesting because it used some fancy marketing concepts to paper over the manual effort required to get the system to group content and display classifications; for example, provide an Endeca system with articles about Beaujolais and the system would put the content in the “wine” category. Believe me, people loved the idea that the system could index words and concepts. And the human part? Yeah, after signing the deal, some customers developed a keener appreciation for the human work required and for the computational load the system imposed on computing resources. Like most of the search vendors of that era, the company ended up selling itself; the buyer was Oracle. Oracle had an appetite for search technology; for example, Applied Linguistics, Triple Hop, and RightNow (also acquired in 2011 when search was “hot”), among others.
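
The “manual effort” is easy to picture with a toy example. The sketch below shows rules-based facet assignment in miniature; the facet names and term lists are my own inventions, not Endeca’s taxonomy, and real deployments needed humans to build and tune mappings like these for every content domain.

```python
# Toy illustration of rules-based facet assignment (not Endeca's code).
# Humans had to create and maintain term-to-facet mappings like these.
FACET_RULES = {
    "Wine": {"beaujolais", "merlot", "pinot noir", "tannin"},
    "Travel": {"itinerary", "passport", "vineyard tour"},
}

def assign_facets(document: str) -> list:
    """Return every facet whose term list matches the document text."""
    text = document.lower()
    return [facet for facet, terms in FACET_RULES.items()
            if any(term in text for term in terms)]

print(assign_facets("A 2019 Beaujolais with soft tannins"))          # ['Wine']
print(assign_facets("Plan a vineyard tour: passport not required"))  # ['Travel']
```

Multiply that by every product line, language quirk, and new content source, and the consulting work the marketing glossed over becomes clear.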

Now Oracle Endeca is back in the news. Frankly I was surprised to read “Oracle Boasted That Its Software Was Used against US Protesters. Then It Took the Tech to China.” My first question was, “When did this alleged taking of the tech to China occur?” The answer: right after Oracle bought Endeca in 2011. Why was Endeca for sale? Not germane to the write up. My own answer to that question is that Endeca had hit a revenue glass ceiling. The Endeca method (disclosed in part in US Patent 7,035,864, filed in 2000) required some technical cartwheels apart from the MBA consulting work. Here’s an example of the “work” required to crank out useful facets:

[Image: diagram from the Endeca patent illustrating the facet computation]

The computational hoo-hah is one reason Endeca chased and caught some cash from Intel. The idea was to use Intel’s whiz-bang multi-core chips to increase the content processing speed. Newly minted MBAs and subject matter experts could handle the manual work; the heavy lifting would fall to Intel’s super tech. Wowza!

Wrong.

The issue with Endeca’s method is suggested in this statement from the article I was surprised to read:

At the peak of the NATO protests, police reportedly used Endeca to process 20,000 tweets an hour.

Okay, 20,000. How many tweets were flying around in 2011? According to a Twitter blog post, in 2013 the volume of tweets was 500 million per day, which works out to about 5,700 per second. Knock those numbers down by 20 percent to approximate 2011, and the tweet flow still runs around 280,000 per minute against Endeca’s 20,000 per hour. Throughput? Yeah, let’s talk about how much actionable information can be derived for a real-time event when the processing will have a tough time catching up with the protest.
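
A back-of-the-envelope check (my arithmetic, not the article’s) makes the mismatch concrete:

```python
# Back-of-the-envelope throughput comparison (my arithmetic, not the article's).
tweets_per_day_2013 = 500_000_000                  # Twitter's 2013 figure
tweets_per_day_2011 = tweets_per_day_2013 * 0.8    # rough 20 percent haircut for 2011

per_second = tweets_per_day_2011 / 86_400          # ~4,600 tweets per second
per_minute = tweets_per_day_2011 / 1_440           # ~278,000 tweets per minute

endeca_per_minute = 20_000 / 60                    # ~333 tweets per minute

print(f"firehose: ~{per_minute:,.0f}/min vs Endeca: ~{endeca_per_minute:.0f}/min")
print(f"gap: roughly {per_minute / endeca_per_minute:,.0f}x")   # about 800x
```

Even with generous rounding, the system was looking at well under one percent of the flow.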

Rules-based, ageing technology which is computationally intensive and pivots on human data massaging is not going to do the job for enterprise search, policeware, or intelware applications. A small ecommerce site selling wine? Perfect. The Twitter fire hose or a more challenging task like E2EE messaging? Highly unlikely.

There were more promising solutions, and what’s interesting is that Oracle invested in one of them. You will have to do some work to discern the connection between Oracle’s Irish investment operation and a company allegedly headquartered in Manchester Square in London, but the links are there. That’s a more interesting China-Oracle connection, and it is one more relevant to monitoring the actions of companies than the Endeca deal.

By the way, on Oracle’s watch Endeca became sort of a market intelligence and ecommerce offering, not a stellar tool for the often-questioned In-Q-Tel operation.

The write up ends with this quote attributed to a wizard:

“It still boggles my mind.”

What boggles my mind is that Endeca is not a particularly timely product. Even more baffling is how the write up missed other, more significant Oracle China connections. Maybe a “real” journalist will visit Manchester Square and check out what companies do business from that location. One of them might be Oracle, maybe?

Why did Oracle pitch the Endeca tech to China? The company was trying to generate a sustainable, high-dollar return from this horse in the Oracle search and content processing corral. Like RightNow, some of those horses do not look like potential Kentucky Derby winners.

Stephen E Arnold, May 31, 2021

Loon Balloon Descends from Fantastic Heights to Parking Lot

May 31, 2021

I read “Alphabet Moonshot Loon Is Jolted by Layoffs – But Employees Are Finding Jobs with Tech Titan.” The article is amusing because of the assertion that “employees are finding jobs with tech titan.” How many Loonies will be joining other units of the online advertising company? The article does not answer the question because that would reveal the functioning of the people management methods at the firm.

I noted this statement in the write up:

Mountain View-based Loon decided to wind down the company after it wasn’t able to craft a viable and sustainable business model.

Yep, the online ad firm learned that balloons float with the wind. Bad weather? It happens, and the Loon balloons would go where Google did not want them to venture. Flight paths, military facilities, transmission lines. You get the idea.

Here’s a statement which may ring hollow with Timnit Gebru:

Loon and Alphabet intend to help the displaced workers, a Loon spokesperson said in an email to this news organization.

Help, it appears, has not been defined. A member of the high school science club management team allegedly said:

If Loon employees do not find alternative roles at Alphabet, they will be eligible to receive severance pay following their end dates.

Loon balloons come down to earth, and it is possible that some Xooglers will be able to park their vans in the parking lot instead of on the street in Mountain View. Cost cutting and reality have intruded on the mom-and-pop online ad merchant, it seems.

Stephen E Arnold, May 31, 2021

Making Life Easier for Professional Publishers: A Call for More Blatant Fraud

May 31, 2021

I enjoyed “Please Commit More Blatant Academic Fraud.” The intent is to highlight the disgusting underbelly of academic research, an underbelly as bare as a naked mole rat’s. The author picks up on the fraudulent peer cheerleading for research related to artificial intelligence, but when tenure is at stake, I wager that professors teaching ethics can be manipulation-minded as well. It just depends upon how one frames the argument, right?

The essay has a very interesting quote; to wit:

It would, of course, be quite difficult to actually distinguish the papers published fraudulently from those published “legitimately”. (That fact alone tells you all you really need to know about the current state of AI research.)

I want to add a slightly different quantum entanglement to the nuclear nature of the academic fraud issue. The professional publishers must be considered. These are the outstanding executives who often publish research known to be wonky. The professional publishers create journals filled with hocus pocus, wrapped in the magic of peer reviewing, and touted as the beacons of “real” information.

If anyone wants more and crazier research written by authors and institutions willing to pay assorted fees to get their estimable contributions to knowledge published, it is the publishers. When an author makes a change, the outstanding professional publishers often charge to fix up a passage. Want reprints? Just get out that electronic payment system. Order away.

The professional publishers are struggling to get libraries to buy, subscribe, license, and renew automatically if possible. More junk research and increased content manipulation will improve the professional publishing system itself.

Imagine. Bogus research in medicine, social science, and quantum computing. When something actually reproducible and substantive becomes available, a researcher will have to spend more time on for-fee commercial databases, apply more research assistant labor, and scan more tweets to figure out what’s “real” and what’s fake.

The advancement of knowledge is enabled, and even the professional publishers can get behind the call for action expressed in “Please Commit More Blatant Academic Fraud.” Marketing, it seems, is now more important for everyone.

Stephen E Arnold, May 31, 2021

And about That Windows 10 Telemetry?

May 28, 2021

The article “How to Disable Telemetry and Data Collection in Windows 10” reveals an important fact: most Windows telemetry is turned on by default. But the write up does not explain what analyses occur for data on the company’s cloud services or for the Outlook email program. I find this amusing, but Microsoft — despite the SolarWinds and Exchange Server missteps — is perceived as the good outfit among the collection of ethical exemplars of US big technology firms.
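
For the curious, the knob most of these how-to articles point at is a single Group Policy registry value. Here is a minimal sketch, assuming Windows, an elevated prompt, and an edition that honors the policy (Home and Pro treat level 0 “Security” as level 1 “Basic”); this is my illustration, not the cited article’s script.

```python
# Minimal sketch of the registry policy most "disable telemetry" guides target.
# Assumptions: Windows, run as Administrator, and an edition that honors the
# policy (Home/Pro treat level 0 "Security" as level 1 "Basic").
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

def set_telemetry_level(level: int = 0) -> None:
    """Write the AllowTelemetry policy value (0=Security, 1=Basic, 2=Enhanced, 3=Full)."""
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AllowTelemetry", 0, winreg.REG_DWORD, level)

if __name__ == "__main__":
    set_telemetry_level(0)
```

Whether that flips off the cloud-side analyses the write up skips over is, of course, another matter.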

I read “Three Years Until We’re in Orwell’s 1984 AI Surveillance Panopticon, Warns Microsoft Boss.” Do the sentiments presented as those allegedly representing the actual factual views of the Microsoft executive Brad Smith reference the Windows 10 telemetry and data collection article mentioned above? Keep in mind that Mr. Smith believed at one time that 1,000 bad actors went after Microsoft and created the minor security lapses which affected a few minor US government agencies and sparked low-profile US law enforcement entities into pre-emptive action on third-party computers to help address certain persistent threats.

I chortled when I read this passage:

Brad Smith warns the science fiction of a government knowing where we are at all times, and even what we’re feeling, is becoming reality in parts of the world. Smith says it’s “difficult to catch up” with ever-advancing AI, which was revealed is being used to scan prisoners’ emotions in China.

Now, about the Microsoft telemetry and other interesting processes? What about the emotions of a Windows 10 user when the printer does not work after an update? Yeah.

Stephen E Arnold, May 28, 2021

Surveillance: Looking Forward

May 28, 2021

I read “The Future of Communication Surveillance: Moving Beyond Lexicons.” The article explains that word lists and indexing are not enough. (There’s no mention of non-text objects and icons with specific meanings upon which bad actors agree before including them in a text message.)

I noted this passage:

Advanced technology such as artificial intelligence (AI), machine learning (ML) and pre-trained models can better detect misconduct and pinpoint the types of risk that a business cares about. AI and ML should work alongside metadata filtering and lexicon alerting to remove irrelevant data and classify communications.

This sounds like cheerleading. The Snowden dump of classified material makes clear that smart software was on the radar of the individuals creating the information released to journalists. Subsequent announcements from policeware and intelware vendors have included references to artificial intelligence and its progeny as a routine component. It has been years since the assertions in the Snowden documents became known, and yet shipping cyber security solutions are not delivering.
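
Stripped of the buzzwords, the quoted recipe amounts to a word list plus a classifier trained on the compliance team’s past decisions. A minimal sketch (mine, not the vendor’s; the watch list and toy training data are invented):

```python
# Toy contrast between lexicon alerting and an ML layer trained on prior
# reviews. Watch list, messages, and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

LEXICON = {"off the books", "delete this", "guaranteed return"}

def lexicon_alert(message: str) -> bool:
    """Fire only when a watch-list phrase appears verbatim."""
    return any(term in message.lower() for term in LEXICON)

# Stand-in for "the team's review of prior alerts."
train_texts = ["let's keep this off the books", "lunch at noon?",
               "delete this thread after you read it", "see attached invoice"]
train_labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

def ml_score(message: str) -> float:
    """Probability the message resembles previously flagged traffic."""
    return float(model.predict_proba(vectorizer.transform([message]))[0, 1])

msg = "make sure this stays off the record"
print(lexicon_alert(msg), round(ml_score(msg), 2))  # the lexicon misses the rephrasing
```

The second layer is only as good as the prior reviews it learns from, which is where the cheerleading gets ahead of reality.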

The article includes this statement about AI:

Automatically learn over time by taking input from the team’s review of prior alerts

And what about this one? AI can

Adapt quickly to changing language to identify phrases you didn’t know you needed to look for

What the SolarWinds’ misstep revealed was:

  1. None of the smart cyber security systems noticed the incursion
  2. None of the smart real time monitoring systems detected repeated code changes and downstream malware within the compromised system
  3. None of the threat alert services sent a warning to users of compromised systems.

Yet we get this write up about the future of surveillance?

Incredible and disconnected from the real-life performance of cyber security vendors’ systems.

Stephen E Arnold, May 28, 2021

WhatsApp Slightly Less Onerous on Privacy Policy

May 28, 2021

WhatsApp, which Facebook acquired in 2014, issued a new privacy policy that gives its parent company control over user data. This is a problem for those who value their privacy. Originally, users had to accept the policy by May 15 or be booted off the service. After facing backlash, however, the company has decided on a slightly less heavy-handed approach, at least in India. The Android Police reports, “WhatsApp Will Progressively Kill Features Until Users Accept New Privacy Policy.” Writer Prahsam Parikh reveals:

“The Press Trust of India reported that the Facebook-owned messaging service won’t delete accounts of those individuals who do not accept the new privacy policy on May 15. However, the same source also confirms that users will be sent reminders about accepting over the next ‘several weeks.’ And in a statement given to Android Central, WhatsApp has confirmed that while it won’t terminate accounts immediately, users who don’t accept the new terms will have only ‘limited account functionality’ available to them until they do. In the short term, that means losing access to your chat list, but you will still be able to see and respond to notifications as well as answer voice and video calls. However, after a few weeks of that, WhatsApp will then switch off all incoming notifications and calls for your account, effectively rendering it useless. The decision not to fully enforce the deadline seems to be in reaction to the stern stance that the Ministry of Electronics and Information Technology (MEITY) in India took against the company. Earlier this year, the ministry filed a counter-affidavit in the high court to prevent WhatsApp from going ahead with the privacy policy update.”

Wow, Facebook really wants that data. We think Facebook will have to relax its “new” rules in order to prevent Signal, Telegram, and Threema from capturing disaffected WhatsApp users.

Cynthia Murrell, May 28, 2021

More about Bert: Will TikTok Videos Be Next?

May 28, 2021

Google asserts its new AI model will deliver significant improvements. SEO Hacker discusses “Google MUM: New Search Technology.” We are told MUM, or Multitask Unified Model, is like BERT but much more powerful. We learn:

“They are built on the same Transformer architecture, but MUM is 1000x more powerful than its predecessor. … Another difference between MUM and BERT is that MUM is trained across 75 languages – not just one language (usually English). This enables the search engine, through the use of MUM, to connect information from all around the world without going through language barriers. Additionally, Google mentioned that MUM is multimodal, so it understands and processes information from modalities such as text and images. They also brought up the possibility for MUM to expand to other modalities such as videos and audio files.”
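
MUM itself is not something the public can poke at, but the “across languages” idea can be approximated with the multilingual BERT checkpoint that is openly available. A rough sketch, with the model choice and example sentences mine rather than Google’s:

```python
# Rough illustration of cross-language matching with a public multilingual
# BERT checkpoint. This is not MUM; model choice and sentences are mine.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled token embeddings as a crude sentence vector."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, tokens, 768)
    return hidden.mean(dim=1).squeeze(0)

english = embed("I hiked Mt. Adams and want to hike Mt. Fuji next fall")
japanese = embed("富士山に登る準備")  # "preparation to climb Mt. Fuji"
unrelated = embed("best cheese to serve with Beaujolais")

cos = torch.nn.functional.cosine_similarity
print(cos(english, japanese, dim=0), cos(english, unrelated, dim=0))
```

Whether the Fuji query and its Japanese counterpart land closer together than the cheese query is exactly the sort of thing MUM is supposed to do far better than this crude pooling.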

For an example of how the new model will work, see either the SEO Hacker write-up or Google’s blog post on the subject. The illustration involves Mt. Fuji. Naturally, the Search Engine Optimization site ponders how the change might affect SEO. Writer Sean Si predicts MUM’s understanding of 75 languages means non-English content will find much wider audiences. The revised algorithm will also serve up more types of content, like podcasts and videos, alongside text-based resources. Both of those sound like positives, at least for searchers. Other ramifications for the field remain to be seen, but Si anticipates SEO pros will have to develop entirely new approaches. Of course, producing quality content relevant to one’s site should remain the top recommendation.

Cynthia Murrell, May 28, 2021

Google and Local Laws: Compliance As a Hedge Against Uncontrollable Costs

May 27, 2021

I read “Google CEO Sundar Pichai on New Social Media Rules: Committed to Comply With Local Laws, Work Constructively.” At first, I thought, “Google is waking up and smelling the La Colombe Corsica Dark Roast.” Then I considered this statement in the write up:

“So, we fully expect governments rightfully to both scrutinize and adopt regulatory frameworks. Be it Europe with copyright directive or India with information regulation etc, we see it as a natural part of societies figuring out how to govern and adapt themselves in this technology-intensive world,” he said, adding that Google engages constructively with regulators around the world, and participates in these processes.

Sounds good but is this the beginning of a Google for the fracture-net?

Also, Google’s enthusiasm for conforming is a recent development. Google wanted to make the world a better place—once. A decade ago, Google seemed to suggest that China had to change the behavior of its government. That appears to have triggered a distancing of Google from China. Then Dragonfly, the China-specific search system, came and possibly went.

With regulators in a number of countries taking action to deal with US technology companies which prefer to break things and apologize after the fact, Google is adapting.

Why?

First, the cost of being Google is high and those costs are quite hard to control.

Second, Google’s grip on personal data and online advertising revenue is weakening with age. Amazon is in the game, and I have heard that product search remains Amazon’s go-to horse for the Madison Avenue derby.

Third, Google has become Google because there has been [a] zero recognition of what the company does and [b] the thrill of Googling has blunted interest in regulating the company.

The same can be said of other US technology giants.

This article about the new Google is less about Google wanting to follow local laws and more about what Google has to do to maintain its revenue streams.

The costs of being Google are high in business and financial terms. The enthusiasm for going local is more about getting into certain markets and keeping the data and money flowing into Google. A failure to do this means that Google’s costs will become an interesting challenge for the high school science club’s management methods.

Stephen E Arnold, May 27, 2021

The Rigors of College: Carpal Tunnel Thumb

May 27, 2021

I don’t think I would be able to graduate from a university today. I am not equipped with the wetware necessary to perform in the current academic environment. Okay, I skipped some grades. I have degrees. I am one of those individuals who prefer paper, pencils, notebooks, and motivated specialists who present information in person. Then I like to ask questions and participate in study groups with fellow students. I am not into Zoom and video games.

“College Credit for Playing Video Games? At Some California Campuses, It’s Happening” reminded me that I am not shaped for today’s digital world. I noted this passage:

“Higher ed needs to evolve or die,” said Dina Ibrahim, the academic advisor of the SF State esports athletic club and a professor of broadcast journalism. “We need to be teaching students relevant skills, that’s going to get them jobs in a rapidly changing landscape.”

That landscape means that esports, games, and unconferences are hot.

The article points out:

Those skills could help students land their first media jobs, said Mark “Garvey” Candella, director of student and education programs for Twitch, a $15 billion company that draws 30 million, mostly younger, visitors to its website daily. Amazon Inc. bought Twitch in 2014 for $970 million. The company makes money by showing ads to viewers, selling subscriptions, and taking a cut of any money viewers donate to streamers. “All the skills that you’re learning and using while you participate in gaming and esports are highly transferable and valuable skills in emerging new and digital media…”

No doubt.

Math, science, language study, and logic? No jobs in those fields, I assume.

I will stay home and wait for government checks. I will earn an F in egames and avoid carpal tunnel wrist braces.

Stephen E Arnold, May 27, 2021
