A Business Case for Search in the Time of Covid and the SolarWinds Misstep

February 8, 2021

Why does someone working in an organization have to make a case for enterprise search? Oh, right, I forgot. Enterprise search has a rich history: Fast Search & Transfer with jail time for the founder, Autonomy with a sentencing date looming for the founder, Entopia with financial pain for its investors, and, well, the list of issues with enterprise search can be extended with references to IBM OmniFind or STAIRS III, Delphes, Siderean, Arikus, Attensity, Brainware, Eegi, Relegence, Hakia, and the memorable Zaizi, among others.

“Making the Business Case for Enterprise Search” is sponsored. That means it is an advertisement, marketing collateral, and hoo hah. But what is its message? I noted this passage:

Knowledge-centric organizations know that tools such as intelligent search are critical for cutting through the noise and making relevant information discoverable. However, many executives don’t prioritize these types of tools.

Yep, and there is a reason. Consider that Elasticsearch is open source. Amazon offers search and is educating the enthusiastic for free. Put these successes against the backdrop of Google’s high profile failure: The GSA or Google Search Appliance, a fine product according to some Google engineers.
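
The open source point is easy to make concrete: a handful of lines stands up a working index, which is why the free-utility view of search keeps winning budget arguments. Below is a minimal sketch, assuming a recent elasticsearch-py client and a local node; the index name and fields are invented for illustration.

    from elasticsearch import Elasticsearch  # pip install elasticsearch

    # Connect to a locally running node (assumption: default port, no auth).
    es = Elasticsearch("http://localhost:9200")

    # Index a document into a hypothetical "memos" index.
    es.index(index="memos", id="1", document={
        "title": "Search procurement memo",
        "body": "Federation, access controls, and index latency drive the real cost.",
    })
    es.indices.refresh(index="memos")  # make the document searchable immediately

    # Run a basic full text query and print scored hits.
    resp = es.search(index="memos", query={"match": {"body": "index latency cost"}})
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])

Getting from that toy to a federated, access-controlled deployment is, of course, where the costs pile up.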

Regardless, today large organizations typically have multiple information retrieval systems. The idea of federating the information is a really good one until the bean counters realize that the staff, the professional for-fee services, and the time required to figure out access controls, file formats, and how to cope with versions, rich media, trade secrets in engineering drawings and chemical formulas, and index latency cost more money than anyone revealed in a marketing pitch.

The write up notes:

In a recent survey, nearly half of all respondents said it was challenging finding the right information when they needed it.

One question: What’s right? The problem with enterprise search is that it is a fake discipline trying to gain traction in a world of business intelligence, analytics, and real time data capture, analysis, and outputs.

I laughed at the reminder “Don’t neglect security.” This is the era of the SolarWinds’ misstep. Security is underfunded in most organizations. Do responsible boards of directors and senior executives need to be reminded that their security systems are now Job Number One?

Enterprise search? Yeah, a hot enterprise solution. Just a solution which has become a utility, and a free one via open source software at that.

Stephen E Arnold, February 8, 2021

Complexity Analysis Underscores a Fallacy in the Value of Mindless Analysis of Big Data

February 8, 2021

First, I want to mention that in the last two days I have read essays which are like PowerPoint slide shows with animation and written text. Is this a new form of “writing”?

Now to the business of the essay and its mini movies: “What Is Complexity Science?” provides a rundown of the different types of complexity which academics, big thinkers, number nerds, and wizard-type people have identified.

If you are not familiar with the buzzwords and how each type of complexity generates behaviors which are tough to predict in real life, read the paper, which is on Microsoft GitHub.

Here’s the list:

  1. Interactions or jujujajaki networks. Think of a graph of social networks evolving in real time.
  2. Emergence. Stuff just happens when other stuff interacts. Rioting crowds or social media memes.
  3. Dynamics. Think back to the pendulum your high school physics teacher tried to explain and got wrong.
  4. Forest fires. Visualize the LA wildfires. (A toy simulation appears after this list.)
  5. Adaptation. Remember your friend from college who went to prison. When he was released and hit the college reunion, he had not yet adjusted to life outside: hunched over, stood with his back to the wall, put his left arm around his food, made weird eye contact, etc.
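
The forest fire entry is the easiest one to make concrete. Here is a minimal sketch of the classic Drossel–Schwabl forest fire cellular automaton; the grid size and probabilities are arbitrary choices, not values from the paper. Tiny parameter changes shift the size and timing of the burns in ways that are hard to predict, which is the behavior the write up is gesturing at.

    import random

    def step(grid, p_growth=0.01, p_lightning=0.0005):
        """One synchronous update of a simple forest fire cellular automaton."""
        rows, cols = len(grid), len(grid[0])
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                cell = grid[r][c]
                if cell == "fire":
                    new[r][c] = "empty"  # a burning cell burns out
                elif cell == "tree":
                    neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                    catches = any(
                        0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == "fire"
                        for nr, nc in neighbors
                    )
                    if catches or random.random() < p_lightning:
                        new[r][c] = "fire"  # fire spreads, or lightning strikes
                elif random.random() < p_growth:
                    new[r][c] = "tree"  # an empty cell regrows
        return new

    grid = [[random.choice(["tree", "empty"]) for _ in range(40)] for _ in range(40)]
    for _ in range(200):
        grid = step(grid)
    print(sum(row.count("tree") for row in grid), "trees standing after 200 steps")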

The write up explains that figuring out what’s happening is difficult. Hence, mathematics. You know, unreasonably effective at outputting useful results. (How about that 70 to 90 percent accuracy? Close enough for horseshoes? Except when the prediction is wrong. Who has heard, “Sorry about the downside of chemotherapy, Ms. Smith. Your treatment failed, and our data suggest it works in most cases”?)

Three observations:

  • Complexity is like thinking about and manipulating infinity. Georg Cantor illustrates what can happen when pondering the infinite.
  • Predictive methods make a stab at making sense out of something which may be full of surprises. What’s important is not the 65 to 85 percent accuracy. The big deal is the 15 to 35 percent which remains — well — unpredictable due to complexity.
  • Humans want certainty, acceptable risk, and possibly change on quite specific terms. Hope springs eternal for mathematicians who deliver information supporting this human need.

Complicated stuff, complexity. Math works until it doesn’t. But now we have a Ramanujan Machine which can generate conjectures. Simple, right?

Stephen E Arnold, February 8, 2021

US Department of Defense: Procurement Methods Zapped by JEDI

February 5, 2021

I don’t know if the information in this article is 100 percent accurate, but it is an entertaining read. Navigate to “Pentagon May Cancel JEDI Contract and Start Over.” The write up does not mention the SolarWinds’ misstep, but I have heard that some DoD work from home professionals are getting a bit of a tan. Solar radiation can be a problem. The write up states:

The Pentagon could be set to cancel the $10 billion Joint Enterprise Defense Infrastructure (JEDI) contract it awarded to Microsoft in 2019, as a legal battle with Amazon rages on. The cancellation, should it occur, could provide significant financial benefits for AWS, with the cloud provider ready to swoop in. A new memo has revealed the extent of the Pentagon’s frustration with the legal wrangling. In particular, the memo states that, should Amazon’s complaint be upheld, the entire JEDI contract may be abandoned.

Here are the operative words:

$10 billion

Legal battle

Microsoft

Amazon

JEDI

and the biggie: frustration.

Amazon arrives at the party without a tan from the SolarWinds’ misstep. Microsoft may have been singed or hit with some first-degree burns. Oracle is a wild card because it may find a way to provide a very competitive option.

Where is the DoD now? Snagged in Covid, wrestling with leadership, adapting to the new administration, working the numbers for the remarkable F-35 alongside figures for A-10s and F-15 enhanced models, and enduring the drone of social media and talk about thousands of nano drones descending on a squad in some delightful camping areas.

If the information in the write up is accurate, perhaps a connection with the SolarWinds’ misstep may surface. But for now, it’s legal hassles and the thrill of many silos of systems.

Stephen E Arnold, February 5, 2021

Reclaiming Reality: Struggling with What the World Actually Is

February 5, 2021

I was amused by the oddball headline “Reclaiming Reality From Chaos” which contained not one business story in the Business section of the estimable New York Times but two. What was it that the odd duck at General Motors said? Oh, right, something like two objectives is no objective. Two separate business stories under one headline make clear that “chaos” is alive and quite healthy in the Gray Lady’s printy and digital world.

The segment which caught my attention is “How the Biden Administration Can Help Wean Americans from the Scourge of Hoaxes and Lies.” (I will not bring up the scourge of veracity issues which have clung to the Gray Lady’s rayon garb with the tenacity of the fur my French bulldog sheds onto my polyester track jacket. That lovable dog’s fur is “sticky.” Good for social media: Not so good for fashionistas.)

The write up consists of suggestions in the business news section of the newspaper for the Biden administration. The sources of these suggestions seem to be academia, non-governmental organizations, and one cyber-centric outfit in the UK. (The UK? Definitely on top of the US thing because Merrie Olde Angleland has done a bang-up job with civil unrest, the California Royals, and the extremely well-executed Brexit thing.)

What are the suggestions? Here’s my understanding of the principal outputs of “real business news” research into America’s national reality crisis. I am tempted to bring up that old chestnut relished by philosophy professors asking 18- and 19-year-olds to define “reality.” I won’t. Reality is an issue which has reduced some thought wizards to insanity. Obviously this is not the Gray Lady’s crossword. To the suggestions:

  1. Create new labels for malefactors. Don’t call the Proud Boys terrorists like the Canadians do. Get a new hashtag.
  2. Create a reality czar. When I read this idea, I thought of Frank Zarb, the “energy czar” appointed by President Gerald Ford. Mr. Zarb was okay, but his junior czar was a piece of work. And I will not bring up, “What is reality?” again.
  3. Vet the big boys’ algorithms. I wonder if the Facebook and Google type outfits will talk or print out the code and allow US government contractors to dig through the information. I wonder if Deloitte or Booz, Allen would get this job. My next question, “Where will these firms find the contractors to do the code work?” Most MBAs are not able to make sense of trivial algorithms like those crafted by the Facebooks and Googles of the world in the last few years. No disrespect, but consulting firms have other competencies. (See my article about McKinsey in today’s Beyond Search stream for a timely example.)
  4. Do the social stimulus thing. The idea is to fix “people’s problems.” That’s a great idea. How are the social programs to deal with the Covid Rona thing working out?

Let’s step back. I want to offer several observations:

First, what the heck is this type of editorial / opinion thing doing in the business section of a major newspaper? What’s wrong with the editorial page or, better yet, the Style section? But business news? Nope.

Second, consider the use of terms like chaos and reality. The references to the cause of the reality / chaos problem are interesting. In the philosophy class in which I suffered during my sophomore year in college, the failure to define terms would have evoked sharp criticism from the wonderful but mostly off-kilter person who taught the course.

Third, the idea of using new words to create categories for people is interesting. Jacques Ellul explains this process in his book Propaganda. Yep, categories work in certain contexts. I won’t explain those contexts. Ellul does a good job of this.

Fourth, the notion that social programs will solve problems is interesting. I wish I could think of a social program which has worked to change people. Readers are invited to remind me so I can fill in the gaps of my knowledge.

Net net: How about business news in the Business section? Put the reality chaos stuff someplace else. Maybe Medium or Substack or on a Gray Lady podcast.

Stephen E Arnold, February 5, 2021

Algolia: Making Search Smarter But Is This Possible?

February 5, 2021

A retail search startup pins its AI hopes on a recent acquisition, we learn from the write-up at SiliconANGLE, “Algolia Acquires MorphL to Embed AI into its Enterprise Search Tech.” The company is using its new purchase to power Algolia AI. The platform predicts searchers’ intent in order to deliver tailored (aka targeted) search results, even on a user’s first interaction with the software. Writer Mike Wheatley tells us:

“Algolia sells a cloud-based search engine that companies can embed in their sites, cloud services and mobile apps via an application programming interface. Online retailers can use the platform to help shoppers browse their product catalogs, for example. Algolia’s technology is also used by websites such as the open publishing platform Medium and the online learning course provider Coursera. Algolia’s enterprise-focused search technology enables companies to create a customized search bar, with tools such as a sidebar so shoppers can quickly filter goods by price, for example. MorphL is a Romanian startup that has created an AI platform for e-commerce personalization that works by predicting how people are likely to interact with a user interface. Its technology will extend Algolia’s search APIs with recommendations and user behavior models that will make it possible for e-commerce websites and apps to deliver more ‘intent-based experiences.’”
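
For readers unfamiliar with what “embed a search engine via an API” means in practice, here is a rough sketch using Algolia’s Python client; the application ID, API key, index name, and records are placeholders, and the exact client calls may differ by client version.

    from algoliasearch.search_client import SearchClient  # pip install algoliasearch

    # Placeholder credentials; real values come from the Algolia dashboard.
    client = SearchClient.create("YOUR_APP_ID", "YOUR_ADMIN_API_KEY")
    index = client.init_index("products")  # hypothetical retail catalog index

    # Push a couple of catalog records (objectID is how Algolia identifies records).
    index.save_objects([
        {"objectID": "1", "name": "Trail camera", "category": "outdoors", "price": 89},
        {"objectID": "2", "name": "Solar phone charger", "category": "outdoors", "price": 35},
    ])

    # Query with a filter, the kind of request a price-facet sidebar would generate.
    results = index.search("camera", {"filters": "price < 100"})
    for hit in results["hits"]:
        print(hit["name"], hit["price"])

The MorphL piece, presumably, sits behind calls like these, reordering or supplementing what comes back based on predicted intent.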

The Google Digital News Initiative funded MorphL’s development. The startup began as an open-source project in 2018 and is based in Bucharest, Romania. Headquartered in San Francisco, Algolia was founded in 2012. MorphL is the company’s second acquisition; it plucked SeaUrchin.IO in 2018.

Will Algolia search be smarter, maybe even cognitive? Worth watching to see how many IQ points are added to Algolia’s results.

Cynthia Murrell, February 5, 2021

McKinsey: MBAs Are a Fascinating Group to Observe

February 5, 2021

Watching blue chip consulting firms is more enjoyable than visiting a zoo. Here’s a good example of the entertainment value of individuals who strive to apply logic to business. Logic is definitely good, right?

“AP Source: McKinsey to Pay $573M for Role in Opioid Crisis” explains that the McKinsey wizards somehow became involved in the “opioid crisis.” Crisis is self-explanatory because most people have been ensnared in the Covid Rona thing. But opioid is difficult to appreciate. Think of addiction, crime, prostitution, trashed families, abandoned children, etc. You get the idea.

How could a blue chip consulting firm become involved in crimes which do not appear in the McKinsey collateral, on its Web site, or in its presentations to potential and current clients?

The write up says in the manner of “real” news outfits:

The global business consulting firm McKinsey & Company has agreed to a $573 million settlement over its role in advising companies on how to “supercharge” opioid sales amid an overdose crisis…

I interpret this to mean that the MBAs used their expertise to incentivize those in the legal pharma chain to move product. “Moving product” is a phrase used by narcotics dealers and MBAs alike, I believe.

The “real” news item reports:

McKinsey provided documents used in legal proceedings regarding OxyContin maker Purdue Pharma, including some that describe its efforts to help the company try to “supercharge” opioid sales in 2013, as reaction to the overdose crisis was taking a toll on prescribing. Documents made public in Purdue proceedings last year include emails among McKinsey.

A wonderful engagement until it wasn’t. Blue chip consulting firms like to write checks to those who generate billable hours. My understanding is that writing checks for unbillable work irritates partners who expect bonuses and adulation for their business acumen.

An allegation of “supercharging” addictive products and producing the secondary effects itemized by me in paragraph two of this post is a bit of a negative. Even worse, the desired secondary effects, like a zippy new Porsche conjured up on the Porsche Car Configurator, a position in a new investment fund, or a nice house and land in New Zealand, do not arrive.

No word on jail time, but there’s a new administration now. The prostitution, child abandonment, and crime issues may become more consequential.

Will this become a Harvard case? Who am I kidding? McKinsey is numero uno. Do los narcotraficantes operate with McKinsey’s acumen, logic, and efficiency? Good question.

Stephen E Arnold, February 5, 2021

Neuroscience To the Rescue if Developers Allow

February 5, 2021

Machine learning has come a long way, but there are still many factors that will confuse an algorithm. Unfortunately, these adversarial examples can be exploited by hackers. The Next Web offers hope for a defense against some of these assaults in “Here’s How Neuroscience Can Protect AI from Cyberattacks.” As is often the case, the key is to copy Mother Nature. Reporter Ben Dickson writes:

“Creating AI systems that are resilient against adversarial attacks has become an active area of research and a hot topic of discussion at AI conferences. In computer vision, one interesting method to protect deep learning systems against adversarial attacks is to apply findings in neuroscience to close the gap between neural networks and the mammalian vision system. Using this approach, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more predictable in their behavior and more robust to adversarial perturbations. In a paper published on the bioRxiv preprint server, the researchers introduce VOneNet, an architecture that combines current deep learning techniques with neuroscience-inspired neural networks. The work, done with help from scientists at the University of Munich, Ludwig Maximilian University, and the University of Augsburg, was accepted at the NeurIPS 2020, one of the prominent annual AI conferences, which will be held virtually this year.”

The article goes on to describe the convolutional neural networks (CNNs) now used in computer vision applications and how they can be fooled. The VOneNet architecture works by swapping out the first few CNN layers for a neural network model based on primates’ primary visual cortex. Researchers found this move proved a strong defense against adversarial attacks. See the piece for the illustrated technical details.
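
To make the idea less abstract, here is a toy sketch of the general pattern the researchers describe: replace a network’s trainable first layer with a fixed, Gabor-filter front end and feed it into a standard CNN trunk. This is not the VOneNet code from the paper; it assumes recent PyTorch and torchvision, and every architectural choice and parameter below is illustrative.

    import math
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    def gabor_kernel(size=7, theta=0.0, sigma=2.0, lambd=4.0):
        """Build one size x size Gabor filter (a crude stand-in for a V1 receptive field)."""
        ax = torch.arange(size, dtype=torch.float32) - size // 2
        y, x = torch.meshgrid(ax, ax, indexing="ij")
        x_t = x * math.cos(theta) + y * math.sin(theta)
        y_t = -x * math.sin(theta) + y * math.cos(theta)
        return torch.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2)) * torch.cos(2 * math.pi * x_t / lambd)

    class V1FrontEnd(nn.Module):
        """Fixed, Gabor-initialized first convolution; frozen so it is never trained."""
        def __init__(self, out_channels=64, kernel_size=7):
            super().__init__()
            self.conv = nn.Conv2d(3, out_channels, kernel_size, stride=2,
                                  padding=kernel_size // 2, bias=False)
            with torch.no_grad():
                for i in range(out_channels):
                    k = gabor_kernel(kernel_size, theta=math.pi * i / out_channels)
                    self.conv.weight[i] = k.expand(3, kernel_size, kernel_size)
            self.conv.weight.requires_grad_(False)
            self.nonlin = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.nonlin(self.conv(x))

    class SketchHybrid(nn.Module):
        """Toy hybrid: fixed V1-style front end feeding a standard ResNet-18 trunk."""
        def __init__(self):
            super().__init__()
            self.front = V1FrontEnd()
            trunk = resnet18(weights=None)
            # Drop the trunk's own first conv/bn/relu; keep maxpool through avgpool, then the classifier.
            self.trunk = nn.Sequential(*list(trunk.children())[3:-1], nn.Flatten(), trunk.fc)

        def forward(self, x):
            return self.trunk(self.front(x))

    model = SketchHybrid()
    print(model(torch.randn(1, 3, 224, 224)).shape)  # expected: torch.Size([1, 1000])

The interesting empirical claim in the paper is that the frozen, biologically constrained front end absorbs some of the perturbations adversarial attacks rely on; the sketch above only shows the wiring, not the evaluation.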

The researchers lament the tendency of AI scientists toward pursuing larger and larger neural networks without slowing down to consider the latest findings of brain mechanisms. Who can be bothered with effectiveness when there is money to be made by hyping scale? We suspect SolarWinds and FireEye, to name a couple, may be ready to think about different approaches to cyber security. Maybe the neuro thing will remediate some skinned knees at these firms? The research team is determined to forge ahead and find more ways to beneficially incorporate biology into deep neural networks. Will AI developers take heed?

Cynthia Murrell, February 5, 2021

What Is Next for Amazon Netradyne?

February 4, 2021

I noted the “real” news outfit CNBC story “Amazon Is Using AI-Equipped Cameras in Delivery Vans and Some Drivers Are Concerned about Privacy.” The use case is monitoring drivers. I have heard that some drivers work like beavers. Other comments suggest that some drivers play fast and loose with their time. These are lazy beavers. Other drivers misplace packages. These are crafty beavers. Another group drives as if the route through the subdivision were a race. These are thrill-loving beavers. The Netradyne Driveri gizmo provides a partial solution with benefits; for example, imagery. My thought is that the Netradyne gizmo can hook into the Amazon AWS mother ship for a range of interesting features and functions. Maybe the data would be of use to those engaged in Amazon’s public sector work; for example, policeware services and solutions?

The story states:

Amazon is using an AI-powered camera made by Netradyne, a San Diego-based start-up that was founded in 2015 by two former senior Qualcomm employees. The camera, called Driveri, has four lenses that capture the road, the driver, and both sides of the vehicle.

I want to step away from the Netradyne and ask a few questions to which I don’t have answers at this time:

  1. Will Amazon learn from the Netradyne deployment what product enhancements to include in the “son of Netradyne”?
  2. What if a vehicle is equipped with multiple Netradyne type devices and shares these data with Amazon’s public sector partners and customers?
  3. What if Amazon’s drone routing surveillance technology is adapted to function with Amazon delivery mechanisms; that is, robot carts, lockers at the local store, trunk centric delivery, Ring doorbells, etc.?

The drivers are the subjects of a Silicon Valley-style A/B test. My hunch is that there will be further smart camera developments either by AWS itself, AWS and a partner, or a few startups taking advantage of AWS technology to provide a platform for an application of the Netradyne learnings.

Who competes with Amazon AWS in this sector? Google, Microsoft, got any ideas? Sure, you do.

Stephen E Arnold, February 4, 2021

Managing Engineers: Make High School Science Club Management Methods More High School-Like?

February 4, 2021

I read an interesting and thoughtful essay in Okay HQ, “Engineering Productivity Can Be Measured – Just Not How You’d Expect.” The “you” seems to be me. That’s okay. As a student of the brilliant HSSCMM (high school science club management method) encapsulated in decisions related to handling staff, I am fascinated by innovations.

The write up points out:

Since the advent of the software industry, most engineering teams have seen productivity as a black box. Only recently have people even begun to build internal tools that optimize performance. Unfortunately, most of these tools measure the wrong metrics and are shockingly similar across companies.

The idea is that MBA-like measures are off the mark.

How does the HSSCMM get back on track? The write up states:

Productivity in engineering therefore naturally increases when you remove the blockers getting in the way of your team.

The idea of a “blocker” is a way to encapsulate the ineffective, inefficient, and clumsy management tactics touted by Peter Drucker and other management experts.

What does a member of the science club perceive as a blocker?

  • Too many interruptions
  • Slow code reviews (a toy measurement sketch follows this list)
  • Lousy development tools
  • Too much context switching (seems like a variant of interruptions, doesn’t it?)
  • Getting pinged to do work outside of business hours (yep, another variation of interrupting a science club member).
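
For the curious, here is a minimal sketch of how one of those blockers, slow code reviews, might actually be measured. The pull request records and field names are hypothetical; in practice they would come from whatever code host the team uses.

    from datetime import datetime
    from statistics import median

    # Hypothetical pull request records; real ones would come from the code host's API.
    pull_requests = [
        {"id": 101, "opened": "2021-02-01T09:00", "first_review": "2021-02-01T15:30"},
        {"id": 102, "opened": "2021-02-01T11:00", "first_review": "2021-02-03T10:00"},
        {"id": 103, "opened": "2021-02-02T14:00", "first_review": "2021-02-02T14:45"},
    ]

    def hours_to_first_review(pr):
        """Elapsed hours between opening a pull request and its first review."""
        opened = datetime.fromisoformat(pr["opened"])
        reviewed = datetime.fromisoformat(pr["first_review"])
        return (reviewed - opened).total_seconds() / 3600

    latencies = [hours_to_first_review(pr) for pr in pull_requests]
    print(f"median hours to first review: {median(latencies):.1f}")
    print(f"worst case: {max(latencies):.1f} hours")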

Let’s summarize my HSSCMM principles. The engineers — at least the ones in the elite of the science club — want to be managed by these precepts:

  • Don’t interrupt the productive engineers/professionals
  • Don’t give the productive engineers / professionals tools they don’t find useful, helpful, good, or up to their standards
  • Provide feedback, right now, you inefficient and unproductive human
  • Don’t annoy productive engineers / professionals outside of “work” hours.

These seem perfectly reasonable if somewhat redundant. However, these productive engineers / professionals have created the systems, methods, apps, and conventions that destroy attention, yield lousy software and tools, and nourish the mindset which has delivered the joys of Twitter, Facebook, Robinhood, et al. to the world.

Got that, Druckerites? If not, our innovations in artificial intelligence will predict your behaviors and our neuromorphic systems will make you follow the precepts of the science club.

That sound about right?

Stephen E Arnold, February 4, 2021

Facebook Algorithms: Pernicious, Careless, Indifferent, or No Big Deal?

February 4, 2021

What is good for the social media platform is not necessarily good for its users. Or society. The Startup examines the “Facebook AI Algorithm: One of the Most Destructive Technologies Ever Invented.” Facebook’s AI is marketed as a way to give users more of what they want to see and that it is—to a point. We suspect most users would like to avoid misinformation, but if it will keep eyeballs on the platform Facebook serves up fake news alongside (or instead of) reputable content. Its algorithms are designed to serve its interests, not ours. Considering Facebook has become the primary source of news in the U.S., this feature (not a bug) is now a real problem for society. Writer David Meerman Scott observes:

“The Facebook Artificial Intelligence-powered algorithm is designed to suck users into the content that interests them the most. The technology is tuned to serve up more and more of what you click on, be that yoga, camping, Manchester United, or K-pop. That sounds great, right? However, the Facebook algorithm also leads tens of millions of its 2.7 billion global users into an abyss of misinformation, a quagmire of lies, and a quicksand of conspiracy theories.”

As we have seen, such conspiracy theories can lead to dire real-world consequences. All because Facebook (and other social media platforms) lead users down personalized rabbit holes for increased ad revenue. Sites respond to criticism by banning some content, but the efforts are proving to be inadequate. Scott suggests the only real solution is to adjust the algorithms themselves to avoid displaying misinformation in the first place. Since this will mean losing money, though, Facebook is unlikely to do so without being forced to by regulators, advertisers, or its employees.

The Next Web looks at how these algorithms work in “Here’s How AI Determines What You See on the Facebook News Feed.” Reporter Thomas Macaulay writes:

“The ranking system first collects candidate posts for each user, including those shared by their friends, Groups, or Pages since their last login. It then gives each post a score based on a variety of factors, such as who shared the content and how it matches with what the user generally interacts with. Next, a lightweight model narrows the pool of candidates down to a shortlist. This allows more powerful neural networks to give each remaining post a score that determines the order in which they’re placed. Finally, the system adds contextual features like diversity rules to ensure that the News Feed has a variety of content. The entire process is complete in the time it takes to open the Facebook app.”
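
Stripped of scale, the pipeline Macaulay describes is a standard multi-stage ranker. The toy sketch below walks through the same four stages with invented data, scoring functions, and field names; nothing here is Facebook’s actual code or model.

    import random

    def candidate_posts(user):
        """Stage 1: gather posts shared since the user's last login (stub data)."""
        return [{"id": i,
                 "author": random.choice(user["friends"]),
                 "topic": random.choice(["yoga", "camping", "news"])}
                for i in range(500)]

    def light_score(post, user):
        """Stage 2: cheap heuristic score used to shrink the candidate pool."""
        closeness = 2.0 if post["author"] in user["close_friends"] else 1.0
        return closeness + user["topic_affinity"].get(post["topic"], 0.0)

    def heavy_score(post, user):
        """Stage 3: stand-in for the expensive neural ranking model."""
        return light_score(post, user) + random.gauss(0, 0.1)

    def apply_diversity(ranked, max_per_topic=3):
        """Stage 4: contextual rules, e.g. cap how many posts one topic contributes."""
        counts, feed = {}, []
        for post in ranked:
            if counts.get(post["topic"], 0) < max_per_topic:
                feed.append(post)
                counts[post["topic"]] = counts.get(post["topic"], 0) + 1
        return feed

    user = {"friends": ["ann", "bo", "cy"],
            "close_friends": {"ann"},
            "topic_affinity": {"camping": 1.5}}

    pool = candidate_posts(user)
    shortlist = sorted(pool, key=lambda p: light_score(p, user), reverse=True)[:50]
    feed = apply_diversity(sorted(shortlist, key=lambda p: heavy_score(p, user), reverse=True))
    print(len(feed), "posts in the feed; top topic:", feed[0]["topic"])

The ad revenue problem the first article complains about lives in the scoring functions: if engagement is the only signal they optimize, misinformation that drives clicks ranks just as well as anything else.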

Given recent events, it is crucial Facebook and other platforms modify their AI asap. What will it take?

Cynthia Murrell, February 4, 2021
